• Retail Reboot: Major Global Brands Transform End-to-End Operations With NVIDIA

    AI is packing and shipping efficiency for the retail and consumer packaged goodsindustries, with a majority of surveyed companies in the space reporting the technology is increasing revenue and reducing operational costs.
    Global brands are reimagining every facet of their businesses with AI, from how products are designed and manufactured to how they’re marketed, shipped and experienced in-store and online.
    At NVIDIA GTC Paris at VivaTech, industry leaders including L’Oréal, LVMH and Nestlé shared how they’re using tools like AI agents and physical AI — powered by NVIDIA AI and simulation technologies — across every step of the product lifecycle to enhance operations and experiences for partners, customers and employees.
    3D Digital Twins and AI Transform Marketing, Advertising and Product Design
    The meeting of generative AI and 3D product digital twins results in unlimited creative potential.
    Nestlé, the world’s largest food and beverage company, today announced a collaboration with NVIDIA and Accenture to launch a new, AI-powered in-house service that will create high-quality product content at scale for e-commerce and digital media channels.
    The new content service, based on digital twins powered by the NVIDIA Omniverse platform, creates exact 3D virtual replicas of physical products. Product packaging can be adjusted or localized digitally, enabling seamless integration into various environments, such as seasonal campaigns or channel-specific formats. This means that new creative content can be generated without having to constantly reshoot from scratch.
    Image courtesy of Nestlé
    The service is developed in partnership with Accenture Song, using Accenture AI Refinery built on NVIDIA Omniverse for advanced digital twin creation. It uses NVIDIA AI Enterprise for generative AI, hosted on Microsoft Azure for robust cloud infrastructure.
    Nestlé already has a baseline of 4,000 3D digital products — mainly for global brands — with the ambition to convert a total of 10,000 products into digital twins in the next two years across global and local brands.
    LVMH, the world’s leading luxury goods company, home to 75 distinguished maisons, is bringing 3D digital twins to its content production processes through its wine and spirits division, Moët Hennessy.
    The group partnered with content configuration engine Grip to develop a solution using the NVIDIA Omniverse platform, which enables the creation of 3D digital twins that power content variation production. With Grip’s solution, Moët Hennessy teams can quickly generate digital marketing assets and experiences to promote luxury products at scale.
    The initiative, led by Capucine Lafarge and Chloé Fournier, has been recognized by LVMH as a leading approach to scaling content creation.
    Image courtesy of Grip
    L’Oréal Gives Marketing and Online Shopping an AI Makeover
    Innovation starts at the drawing board. Today, that board is digital — and it’s powered by AI.
    L’Oréal Groupe, the world’s leading beauty player, announced its collaboration with NVIDIA today. Through this collaboration, L’Oréal and its partner ecosystem will leverage the NVIDIA AI Enterprise platform to transform its consumer beauty experiences, marketing and advertising content pipelines.
    “AI doesn’t think with the same constraints as a human being. That opens new avenues for creativity,” said Anne Machet, global head of content and entertainment at L’Oréal. “Generative AI enables our teams and partner agencies to explore creative possibilities.”
    CreAItech, L’Oréal’s generative AI content platform, is augmenting the creativity of marketing and content teams. Combining a modular ecosystem of models, expertise, technologies and partners — including NVIDIA — CreAltech empowers marketers to generate thousands of unique, on-brand images, videos and lines of text for diverse platforms and global audiences.
    The solution empowers L’Oréal’s marketing teams to quickly iterate on campaigns that improve consumer engagement across social media, e-commerce content and influencer marketing — driving higher conversion rates.

    Noli.com, the first AI-powered multi-brand marketplace startup founded and backed by the  L’Oréal Groupe, is reinventing how people discover and shop for beauty products.
    Noli’s AI Beauty Matchmaker experience uses L’Oréal Groupe’s century-long expertise in beauty, including its extensive knowledge of beauty science, beauty tech and consumer insights, built from over 1 million skin data points and analysis of thousands of product formulations. It gives users a BeautyDNA profile with expert-level guidance and personalized product recommendations for skincare and haircare.
    “Beauty shoppers are often overwhelmed by choice and struggling to find the products that are right for them,” said Amos Susskind, founder and CEO of Noli. “By applying the latest AI models accelerated by NVIDIA and Accenture to the unparalleled knowledge base and expertise of the L’Oréal Groupe, we can provide hyper-personalized, explainable recommendations to our users.” 

    The Accenture AI Refinery, powered by NVIDIA AI Enterprise, will provide the platform for Noli to experiment and scale. Noli’s new agent models will use NVIDIA NIM and NVIDIA NeMo microservices, including NeMo Retriever, running on Microsoft Azure.
    Rapid Innovation With the NVIDIA Partner Ecosystem
    NVIDIA’s ecosystem of solution provider partners empowers retail and CPG companies to innovate faster, personalize customer experiences, and optimize operations with NVIDIA accelerated computing and AI.
    Global digital agency Monks is reshaping the landscape of AI-driven marketing, creative production and enterprise transformation. At the heart of their innovation lies the Monks.Flow platform that enhances both the speed and sophistication of creative workflows through NVIDIA Omniverse, NVIDIA NIM microservices and Triton Inference Server for lightning-fast inference.
    AI image solutions provider Bria is helping retail giants like Lidl and L’Oreal to enhance marketing asset creation. Bria AI transforms static product images into compelling, dynamic advertisements that can be quickly scaled for use across any marketing need.
    The company’s generative AI platform uses NVIDIA Triton Inference Server software and the NVIDIA TensorRT software development kit for accelerated inference, as well as NVIDIA NIM and NeMo microservices for quick image generation at scale.
    Physical AI Brings Acceleration to Supply Chain and Logistics
    AI’s impact extends far beyond the digital world. Physical AI-powered warehousing robots, for example, are helping maximize efficiency in retail supply chain operations. Four in five retail companies have reported that AI has helped reduce supply chain operational costs, with 25% reporting cost reductions of at least 10%.
    Technology providers Lyric, KoiReader Technologies and Exotec are tackling the challenges of integrating AI into complex warehouse environments.
    Lyric is using the NVIDIA cuOpt GPU-accelerated solver for warehouse network planning and route optimization, and is collaborating with NVIDIA to apply the technology to broader supply chain decision-making problems. KoiReader Technologies is tapping the NVIDIA Metropolis stack for its computer vision solutions within logistics, supply chain and manufacturing environments using the KoiVision Platform. And Exotec is using NVIDIA CUDA libraries and the NVIDIA JetPack software development kit for embedded robotic systems in warehouse and distribution centers.
    From real-time robotics orchestration to predictive maintenance, these solutions are delivering impact on uptime, throughput and cost savings for supply chain operations.
    Learn more by joining a follow-up discussion on digital twins and AI-powered creativity with Microsoft, Nestlé, Accenture and NVIDIA at Cannes Lions on Monday, June 16.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
    #retail #reboot #major #global #brands
    Retail Reboot: Major Global Brands Transform End-to-End Operations With NVIDIA
    AI is packing and shipping efficiency for the retail and consumer packaged goodsindustries, with a majority of surveyed companies in the space reporting the technology is increasing revenue and reducing operational costs. Global brands are reimagining every facet of their businesses with AI, from how products are designed and manufactured to how they’re marketed, shipped and experienced in-store and online. At NVIDIA GTC Paris at VivaTech, industry leaders including L’Oréal, LVMH and Nestlé shared how they’re using tools like AI agents and physical AI — powered by NVIDIA AI and simulation technologies — across every step of the product lifecycle to enhance operations and experiences for partners, customers and employees. 3D Digital Twins and AI Transform Marketing, Advertising and Product Design The meeting of generative AI and 3D product digital twins results in unlimited creative potential. Nestlé, the world’s largest food and beverage company, today announced a collaboration with NVIDIA and Accenture to launch a new, AI-powered in-house service that will create high-quality product content at scale for e-commerce and digital media channels. The new content service, based on digital twins powered by the NVIDIA Omniverse platform, creates exact 3D virtual replicas of physical products. Product packaging can be adjusted or localized digitally, enabling seamless integration into various environments, such as seasonal campaigns or channel-specific formats. This means that new creative content can be generated without having to constantly reshoot from scratch. Image courtesy of Nestlé The service is developed in partnership with Accenture Song, using Accenture AI Refinery built on NVIDIA Omniverse for advanced digital twin creation. It uses NVIDIA AI Enterprise for generative AI, hosted on Microsoft Azure for robust cloud infrastructure. Nestlé already has a baseline of 4,000 3D digital products — mainly for global brands — with the ambition to convert a total of 10,000 products into digital twins in the next two years across global and local brands. LVMH, the world’s leading luxury goods company, home to 75 distinguished maisons, is bringing 3D digital twins to its content production processes through its wine and spirits division, Moët Hennessy. The group partnered with content configuration engine Grip to develop a solution using the NVIDIA Omniverse platform, which enables the creation of 3D digital twins that power content variation production. With Grip’s solution, Moët Hennessy teams can quickly generate digital marketing assets and experiences to promote luxury products at scale. The initiative, led by Capucine Lafarge and Chloé Fournier, has been recognized by LVMH as a leading approach to scaling content creation. Image courtesy of Grip L’Oréal Gives Marketing and Online Shopping an AI Makeover Innovation starts at the drawing board. Today, that board is digital — and it’s powered by AI. L’Oréal Groupe, the world’s leading beauty player, announced its collaboration with NVIDIA today. Through this collaboration, L’Oréal and its partner ecosystem will leverage the NVIDIA AI Enterprise platform to transform its consumer beauty experiences, marketing and advertising content pipelines. “AI doesn’t think with the same constraints as a human being. That opens new avenues for creativity,” said Anne Machet, global head of content and entertainment at L’Oréal. “Generative AI enables our teams and partner agencies to explore creative possibilities.” CreAItech, L’Oréal’s generative AI content platform, is augmenting the creativity of marketing and content teams. Combining a modular ecosystem of models, expertise, technologies and partners — including NVIDIA — CreAltech empowers marketers to generate thousands of unique, on-brand images, videos and lines of text for diverse platforms and global audiences. The solution empowers L’Oréal’s marketing teams to quickly iterate on campaigns that improve consumer engagement across social media, e-commerce content and influencer marketing — driving higher conversion rates. Noli.com, the first AI-powered multi-brand marketplace startup founded and backed by the  L’Oréal Groupe, is reinventing how people discover and shop for beauty products. Noli’s AI Beauty Matchmaker experience uses L’Oréal Groupe’s century-long expertise in beauty, including its extensive knowledge of beauty science, beauty tech and consumer insights, built from over 1 million skin data points and analysis of thousands of product formulations. It gives users a BeautyDNA profile with expert-level guidance and personalized product recommendations for skincare and haircare. “Beauty shoppers are often overwhelmed by choice and struggling to find the products that are right for them,” said Amos Susskind, founder and CEO of Noli. “By applying the latest AI models accelerated by NVIDIA and Accenture to the unparalleled knowledge base and expertise of the L’Oréal Groupe, we can provide hyper-personalized, explainable recommendations to our users.”  The Accenture AI Refinery, powered by NVIDIA AI Enterprise, will provide the platform for Noli to experiment and scale. Noli’s new agent models will use NVIDIA NIM and NVIDIA NeMo microservices, including NeMo Retriever, running on Microsoft Azure. Rapid Innovation With the NVIDIA Partner Ecosystem NVIDIA’s ecosystem of solution provider partners empowers retail and CPG companies to innovate faster, personalize customer experiences, and optimize operations with NVIDIA accelerated computing and AI. Global digital agency Monks is reshaping the landscape of AI-driven marketing, creative production and enterprise transformation. At the heart of their innovation lies the Monks.Flow platform that enhances both the speed and sophistication of creative workflows through NVIDIA Omniverse, NVIDIA NIM microservices and Triton Inference Server for lightning-fast inference. AI image solutions provider Bria is helping retail giants like Lidl and L’Oreal to enhance marketing asset creation. Bria AI transforms static product images into compelling, dynamic advertisements that can be quickly scaled for use across any marketing need. The company’s generative AI platform uses NVIDIA Triton Inference Server software and the NVIDIA TensorRT software development kit for accelerated inference, as well as NVIDIA NIM and NeMo microservices for quick image generation at scale. Physical AI Brings Acceleration to Supply Chain and Logistics AI’s impact extends far beyond the digital world. Physical AI-powered warehousing robots, for example, are helping maximize efficiency in retail supply chain operations. Four in five retail companies have reported that AI has helped reduce supply chain operational costs, with 25% reporting cost reductions of at least 10%. Technology providers Lyric, KoiReader Technologies and Exotec are tackling the challenges of integrating AI into complex warehouse environments. Lyric is using the NVIDIA cuOpt GPU-accelerated solver for warehouse network planning and route optimization, and is collaborating with NVIDIA to apply the technology to broader supply chain decision-making problems. KoiReader Technologies is tapping the NVIDIA Metropolis stack for its computer vision solutions within logistics, supply chain and manufacturing environments using the KoiVision Platform. And Exotec is using NVIDIA CUDA libraries and the NVIDIA JetPack software development kit for embedded robotic systems in warehouse and distribution centers. From real-time robotics orchestration to predictive maintenance, these solutions are delivering impact on uptime, throughput and cost savings for supply chain operations. Learn more by joining a follow-up discussion on digital twins and AI-powered creativity with Microsoft, Nestlé, Accenture and NVIDIA at Cannes Lions on Monday, June 16. Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions. #retail #reboot #major #global #brands
    BLOGS.NVIDIA.COM
    Retail Reboot: Major Global Brands Transform End-to-End Operations With NVIDIA
    AI is packing and shipping efficiency for the retail and consumer packaged goods (CPG) industries, with a majority of surveyed companies in the space reporting the technology is increasing revenue and reducing operational costs. Global brands are reimagining every facet of their businesses with AI, from how products are designed and manufactured to how they’re marketed, shipped and experienced in-store and online. At NVIDIA GTC Paris at VivaTech, industry leaders including L’Oréal, LVMH and Nestlé shared how they’re using tools like AI agents and physical AI — powered by NVIDIA AI and simulation technologies — across every step of the product lifecycle to enhance operations and experiences for partners, customers and employees. 3D Digital Twins and AI Transform Marketing, Advertising and Product Design The meeting of generative AI and 3D product digital twins results in unlimited creative potential. Nestlé, the world’s largest food and beverage company, today announced a collaboration with NVIDIA and Accenture to launch a new, AI-powered in-house service that will create high-quality product content at scale for e-commerce and digital media channels. The new content service, based on digital twins powered by the NVIDIA Omniverse platform, creates exact 3D virtual replicas of physical products. Product packaging can be adjusted or localized digitally, enabling seamless integration into various environments, such as seasonal campaigns or channel-specific formats. This means that new creative content can be generated without having to constantly reshoot from scratch. Image courtesy of Nestlé The service is developed in partnership with Accenture Song, using Accenture AI Refinery built on NVIDIA Omniverse for advanced digital twin creation. It uses NVIDIA AI Enterprise for generative AI, hosted on Microsoft Azure for robust cloud infrastructure. Nestlé already has a baseline of 4,000 3D digital products — mainly for global brands — with the ambition to convert a total of 10,000 products into digital twins in the next two years across global and local brands. LVMH, the world’s leading luxury goods company, home to 75 distinguished maisons, is bringing 3D digital twins to its content production processes through its wine and spirits division, Moët Hennessy. The group partnered with content configuration engine Grip to develop a solution using the NVIDIA Omniverse platform, which enables the creation of 3D digital twins that power content variation production. With Grip’s solution, Moët Hennessy teams can quickly generate digital marketing assets and experiences to promote luxury products at scale. The initiative, led by Capucine Lafarge and Chloé Fournier, has been recognized by LVMH as a leading approach to scaling content creation. Image courtesy of Grip L’Oréal Gives Marketing and Online Shopping an AI Makeover Innovation starts at the drawing board. Today, that board is digital — and it’s powered by AI. L’Oréal Groupe, the world’s leading beauty player, announced its collaboration with NVIDIA today. Through this collaboration, L’Oréal and its partner ecosystem will leverage the NVIDIA AI Enterprise platform to transform its consumer beauty experiences, marketing and advertising content pipelines. “AI doesn’t think with the same constraints as a human being. That opens new avenues for creativity,” said Anne Machet, global head of content and entertainment at L’Oréal. “Generative AI enables our teams and partner agencies to explore creative possibilities.” CreAItech, L’Oréal’s generative AI content platform, is augmenting the creativity of marketing and content teams. Combining a modular ecosystem of models, expertise, technologies and partners — including NVIDIA — CreAltech empowers marketers to generate thousands of unique, on-brand images, videos and lines of text for diverse platforms and global audiences. The solution empowers L’Oréal’s marketing teams to quickly iterate on campaigns that improve consumer engagement across social media, e-commerce content and influencer marketing — driving higher conversion rates. Noli.com, the first AI-powered multi-brand marketplace startup founded and backed by the  L’Oréal Groupe, is reinventing how people discover and shop for beauty products. Noli’s AI Beauty Matchmaker experience uses L’Oréal Groupe’s century-long expertise in beauty, including its extensive knowledge of beauty science, beauty tech and consumer insights, built from over 1 million skin data points and analysis of thousands of product formulations. It gives users a BeautyDNA profile with expert-level guidance and personalized product recommendations for skincare and haircare. “Beauty shoppers are often overwhelmed by choice and struggling to find the products that are right for them,” said Amos Susskind, founder and CEO of Noli. “By applying the latest AI models accelerated by NVIDIA and Accenture to the unparalleled knowledge base and expertise of the L’Oréal Groupe, we can provide hyper-personalized, explainable recommendations to our users.”  https://blogs.nvidia.com/wp-content/uploads/2025/06/Noli_Demo.mp4 The Accenture AI Refinery, powered by NVIDIA AI Enterprise, will provide the platform for Noli to experiment and scale. Noli’s new agent models will use NVIDIA NIM and NVIDIA NeMo microservices, including NeMo Retriever, running on Microsoft Azure. Rapid Innovation With the NVIDIA Partner Ecosystem NVIDIA’s ecosystem of solution provider partners empowers retail and CPG companies to innovate faster, personalize customer experiences, and optimize operations with NVIDIA accelerated computing and AI. Global digital agency Monks is reshaping the landscape of AI-driven marketing, creative production and enterprise transformation. At the heart of their innovation lies the Monks.Flow platform that enhances both the speed and sophistication of creative workflows through NVIDIA Omniverse, NVIDIA NIM microservices and Triton Inference Server for lightning-fast inference. AI image solutions provider Bria is helping retail giants like Lidl and L’Oreal to enhance marketing asset creation. Bria AI transforms static product images into compelling, dynamic advertisements that can be quickly scaled for use across any marketing need. The company’s generative AI platform uses NVIDIA Triton Inference Server software and the NVIDIA TensorRT software development kit for accelerated inference, as well as NVIDIA NIM and NeMo microservices for quick image generation at scale. Physical AI Brings Acceleration to Supply Chain and Logistics AI’s impact extends far beyond the digital world. Physical AI-powered warehousing robots, for example, are helping maximize efficiency in retail supply chain operations. Four in five retail companies have reported that AI has helped reduce supply chain operational costs, with 25% reporting cost reductions of at least 10%. Technology providers Lyric, KoiReader Technologies and Exotec are tackling the challenges of integrating AI into complex warehouse environments. Lyric is using the NVIDIA cuOpt GPU-accelerated solver for warehouse network planning and route optimization, and is collaborating with NVIDIA to apply the technology to broader supply chain decision-making problems. KoiReader Technologies is tapping the NVIDIA Metropolis stack for its computer vision solutions within logistics, supply chain and manufacturing environments using the KoiVision Platform. And Exotec is using NVIDIA CUDA libraries and the NVIDIA JetPack software development kit for embedded robotic systems in warehouse and distribution centers. From real-time robotics orchestration to predictive maintenance, these solutions are delivering impact on uptime, throughput and cost savings for supply chain operations. Learn more by joining a follow-up discussion on digital twins and AI-powered creativity with Microsoft, Nestlé, Accenture and NVIDIA at Cannes Lions on Monday, June 16. Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
    Like
    Love
    Sad
    Wow
    Angry
    23
    0 Commentarii 0 Distribuiri
  • Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid

    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand.
    Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation.
    At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics.
    Future use cases for AEON include:

    Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Realityplatform powering Hexagon Reality Cloud Studio.
    Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings.
    Part inspection, which includes checking parts for defects or ensuring adherence to specifications.
    Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners.

    “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.”

    Using NVIDIA’s Three Computers to Develop AEON 
    To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models.
    Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations.
    AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning.


    This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment.
    In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation.
    “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.”
    Data Comes to Life Through Reality Capture and Omniverse Integration 
    AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas.

    Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure.
    “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.”
    AEON’s Next Steps
    By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON.
    This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data.
    Watch the Hexagon LIVE keynote, explore presentations and read more about AEON.
    All imagery courtesy of Hexagon.
    #hexagon #taps #nvidia #robotics #software
    Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid
    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand. Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation. At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics. Future use cases for AEON include: Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Realityplatform powering Hexagon Reality Cloud Studio. Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings. Part inspection, which includes checking parts for defects or ensuring adherence to specifications. Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners. “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.” Using NVIDIA’s Three Computers to Develop AEON  To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models. Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations. AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning. This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment. In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation. “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.” Data Comes to Life Through Reality Capture and Omniverse Integration  AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas. Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure. “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.” AEON’s Next Steps By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON. This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data. Watch the Hexagon LIVE keynote, explore presentations and read more about AEON. All imagery courtesy of Hexagon. #hexagon #taps #nvidia #robotics #software
    BLOGS.NVIDIA.COM
    Hexagon Taps NVIDIA Robotics and AI Software to Build and Deploy AEON, a New Humanoid
    As a global labor shortage leaves 50 million positions unfilled across industries like manufacturing and logistics, Hexagon — a global leader in measurement technologies — is developing humanoid robots that can lend a helping hand. Industrial sectors depend on skilled workers to perform a variety of error-prone tasks, including operating high-precision scanners for reality capture — the process of capturing digital data to replicate the real world in simulation. At the Hexagon LIVE Global conference, Hexagon’s robotics division today unveiled AEON — a new humanoid robot built in collaboration with NVIDIA that’s engineered to perform a wide range of industrial applications, from manipulation and asset inspection to reality capture and operator support. Hexagon plans to deploy AEON across automotive, transportation, aerospace, manufacturing, warehousing and logistics. Future use cases for AEON include: Reality capture, which involves automatic planning and then scanning of assets, industrial spaces and environments to generate 3D models. The captured data is then used for advanced visualization and collaboration in the Hexagon Digital Reality (HxDR) platform powering Hexagon Reality Cloud Studio (RCS). Manipulation tasks, such as sorting and moving parts in various industrial and manufacturing settings. Part inspection, which includes checking parts for defects or ensuring adherence to specifications. Industrial operations, including highly dexterous technical tasks like machinery operations, teleoperation and scanning parts using high-end scanners. “The age of general-purpose robotics has arrived, due to technological advances in simulation and physical AI,” said Deepu Talla, vice president of robotics and edge AI at NVIDIA. “Hexagon’s new AEON humanoid embodies the integration of NVIDIA’s three-computer robotics platform and is making a significant leap forward in addressing industry-critical challenges.” Using NVIDIA’s Three Computers to Develop AEON  To build AEON, Hexagon used NVIDIA’s three computers for developing and deploying physical AI systems. They include AI supercomputers to train and fine-tune powerful foundation models; the NVIDIA Omniverse platform, running on NVIDIA OVX servers, for testing and optimizing these models in simulation environments using real and physically based synthetic data; and NVIDIA IGX Thor robotic computers to run the models. Hexagon is exploring using NVIDIA accelerated computing to post-train the NVIDIA Isaac GR00T N1.5 open foundation model to improve robot reasoning and policies, and tapping Isaac GR00T-Mimic to generate vast amounts of synthetic motion data from a few human demonstrations. AEON learns many of its skills through simulations powered by the NVIDIA Isaac platform. Hexagon uses NVIDIA Isaac Sim, a reference robotic simulation application built on Omniverse, to simulate complex robot actions like navigation, locomotion and manipulation. These skills are then refined using reinforcement learning in NVIDIA Isaac Lab, an open-source framework for robot learning. https://blogs.nvidia.com/wp-content/uploads/2025/06/Copy-of-robotics-hxgn-live-blog-1920x1080-1.mp4 This simulation-first approach enabled Hexagon to fast-track its robotic development, allowing AEON to master core locomotion skills in just 2-3 weeks — rather than 5-6 months — before real-world deployment. In addition, AEON taps into NVIDIA Jetson Orin onboard computers to autonomously move, navigate and perform its tasks in real time, enhancing its speed and accuracy while operating in complex and dynamic environments. Hexagon is also planning to upgrade AEON with NVIDIA IGX Thor to enable functional safety for collaborative operation. “Our goal with AEON was to design an intelligent, autonomous humanoid that addresses the real-world challenges industrial leaders have shared with us over the past months,” said Arnaud Robert, president of Hexagon’s robotics division. “By leveraging NVIDIA’s full-stack robotics and simulation platforms, we were able to deliver a best-in-class humanoid that combines advanced mechatronics, multimodal sensor fusion and real-time AI.” Data Comes to Life Through Reality Capture and Omniverse Integration  AEON will be piloted in factories and warehouses to scan everything from small precision parts and automotive components to large assembly lines and storage areas. Captured data comes to life in RCS, a platform that allows users to collaborate, visualize and share reality-capture data by tapping into HxDR and NVIDIA Omniverse running in the cloud. This removes the constraint of local infrastructure. “Digital twins offer clear advantages, but adoption has been challenging in several industries,” said Lucas Heinzle, vice president of research and development at Hexagon’s robotics division. “AEON’s sophisticated sensor suite enables the integration of reality data capture with NVIDIA Omniverse, streamlining workflows for our customers and moving us closer to making digital twins a mainstream tool for collaboration and innovation.” AEON’s Next Steps By adopting the OpenUSD framework and developing on Omniverse, Hexagon can generate high-fidelity digital twins from scanned data — establishing a data flywheel to continuously train AEON. This latest work with Hexagon is helping shape the future of physical AI — delivering scalable, efficient solutions to address the challenges faced by industries that depend on capturing real-world data. Watch the Hexagon LIVE keynote, explore presentations and read more about AEON. All imagery courtesy of Hexagon.
    Like
    Love
    Wow
    Sad
    Angry
    38
    0 Commentarii 0 Distribuiri
  • Well, folks, it’s finally happened: Microsoft has teamed up with Asus to bless us with the “ROG Xbox Ally range” — yes, that’s right, the first Xbox handhelds have arrived! Because clearly, we were all just waiting for the day when we could play Halo on a device that fits in our pockets. Who needs a console at home when you can have a mini Xbox that can barely fit alongside your keys and loose change?

    Let’s take a moment to appreciate the sheer brilliance of this innovation. After years of gaming on a screen that’s bigger than your average coffee table, now you can squint at a miniature version of the Xbox screen while sitting on the bus. Who needs comfort and relaxation when you can sacrifice your eyesight for the sake of portability? Forget about the stress of lugging around your gaming setup; now you can just carry a glorified remote control!

    And how about that collaboration with Asus? Because when I think of epic gaming experiences, I definitely think of a partnership that sounds like it was cooked up in a boardroom over a cold cup of coffee. “What if we took the weight of a console and squeezed it into a device that feels like a brick?” Genius! The name “ROG Xbox Ally” even sounds like it was generated by an AI trying too hard to sound cool. “ROG” is obviously for “Really Over-the-Top Gaming,” and “Ally” is just the polite way of saying, “We’re in this mess together.”

    Let’s not overlook the fact that the last thing we needed in our lives was another device to charge. Who doesn’t love the thrill of realizing you forgot to plug in your handheld Xbox after a long day at work? Nothing screams “gaming freedom” quite like being tethered to a wall outlet while your friends are enjoying epic multiplayer sessions. Who wouldn’t want to take their gaming experience to the next level of inconvenience?

    Speaking of multiplayer, you can bet that those intense gaming sessions will be even more fun when you’re all huddled together, squinting at these tiny screens, trying to figure out how to communicate when half your friends can’t even see the action happening. It’s a whole new level of bonding, folks! “Did I just shoot you, or was that the guy on my left? Let’s argue about it while we all strain our necks to see the screen.”

    In conclusion, as we welcome the ROG Xbox Ally range into our lives, let’s take a moment to appreciate the madness of this handheld revolution. If you’ve ever dreamed of playing your favorite Xbox games on a device that feels like a high-tech paperweight, then congratulations! The future is here, and it’s as absurd as it sounds. Remember, gaming isn’t just about playing; it’s about how creatively we can inconvenience ourselves while doing so.

    #ROGXboxAlly #XboxHandheld #GamingInnovation #PortableGaming #TechHumor
    Well, folks, it’s finally happened: Microsoft has teamed up with Asus to bless us with the “ROG Xbox Ally range” — yes, that’s right, the first Xbox handhelds have arrived! Because clearly, we were all just waiting for the day when we could play Halo on a device that fits in our pockets. Who needs a console at home when you can have a mini Xbox that can barely fit alongside your keys and loose change? Let’s take a moment to appreciate the sheer brilliance of this innovation. After years of gaming on a screen that’s bigger than your average coffee table, now you can squint at a miniature version of the Xbox screen while sitting on the bus. Who needs comfort and relaxation when you can sacrifice your eyesight for the sake of portability? Forget about the stress of lugging around your gaming setup; now you can just carry a glorified remote control! And how about that collaboration with Asus? Because when I think of epic gaming experiences, I definitely think of a partnership that sounds like it was cooked up in a boardroom over a cold cup of coffee. “What if we took the weight of a console and squeezed it into a device that feels like a brick?” Genius! The name “ROG Xbox Ally” even sounds like it was generated by an AI trying too hard to sound cool. “ROG” is obviously for “Really Over-the-Top Gaming,” and “Ally” is just the polite way of saying, “We’re in this mess together.” Let’s not overlook the fact that the last thing we needed in our lives was another device to charge. Who doesn’t love the thrill of realizing you forgot to plug in your handheld Xbox after a long day at work? Nothing screams “gaming freedom” quite like being tethered to a wall outlet while your friends are enjoying epic multiplayer sessions. Who wouldn’t want to take their gaming experience to the next level of inconvenience? Speaking of multiplayer, you can bet that those intense gaming sessions will be even more fun when you’re all huddled together, squinting at these tiny screens, trying to figure out how to communicate when half your friends can’t even see the action happening. It’s a whole new level of bonding, folks! “Did I just shoot you, or was that the guy on my left? Let’s argue about it while we all strain our necks to see the screen.” In conclusion, as we welcome the ROG Xbox Ally range into our lives, let’s take a moment to appreciate the madness of this handheld revolution. If you’ve ever dreamed of playing your favorite Xbox games on a device that feels like a high-tech paperweight, then congratulations! The future is here, and it’s as absurd as it sounds. Remember, gaming isn’t just about playing; it’s about how creatively we can inconvenience ourselves while doing so. #ROGXboxAlly #XboxHandheld #GamingInnovation #PortableGaming #TechHumor
    The first Xbox handhelds have finally arrived
    The ROG Xbox Ally range has been developed by Microsoft in collaboration with Asus.
    Like
    Love
    Wow
    Sad
    Angry
    562
    1 Commentarii 0 Distribuiri
  • Casa Sofia by Mário Martins Atelier: A Contemporary Urban Infill in Lagos

    Casa Sofia | © Fernando Guerra / FG+SG
    Located in the historic heart of Lagos, Portugal, Casa Sofia by Mário Martins Atelier is a thoughtful exercise in urban integration and contemporary reinterpretation. Occupying a site once held by a modest two-story house, the project is situated on the corner of a block facing the Church of St Sebastião. With its commanding presence, this national monument set a formidable challenge for the architects: introducing a new residence that respects the weight of history while offering a clear, contemporary expression.

    Casa Sofia Technical Information

    Architects1-4: Mário Martins Atelier
    Location: Lagos, Portugal
    Project Completion Years: 2023
    Photographs: © Fernando Guerra / FG+SG

    It is therefore important to design a building to fit into and complete the block. A house that is quiet and solid, with rhythmic metrics, whose new design brings an identity, with the weight and scent of the times, to a city that has existed for many centuries.
    – Mário Martins Atelier

    Casa Sofia Photographs

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG

    © Fernando Guerra / FG+SG
    Spatial Organization and Circulation
    The design’s ambition is anchored in reconciling modern residential needs with the dense urban fabric that defines the walled city. Rather than imposing a bold or disruptive form, the project embraces the existing rhythms and textures of the surrounding architecture. The result is a building that both defers to and elevates the neighborhood’s character. Its restrained profile and carefully modulated facade echo the massing and articulation of the original house while introducing an identity that is clearly of its time.
    At the core of Casa Sofia’s spatial organization is a deliberate hierarchy of spaces that transitions seamlessly between public, semi-public, and private domains. Entry from the street occurs through a modest set of steps leading to an exterior atrium. This threshold mediates the relationship between the public realm and the interior, grounding the house in its urban context. Once inside, an open hall reveals the vertical flow of the building, dominated by a staircase that appears to float, linking the house’s various levels while maintaining visual continuity throughout.
    The ground floor houses three bedrooms, each with an ensuite bathroom, radiating from the central hall. This level also contains a small basement for technical support, reinforcing the discreet layering of functional and domestic spaces. Midway up the staircase, the house opens onto a garage, a laundry room, and an intimate courtyard. These areas, essential for daily life, are seamlessly integrated into the overall composition, contributing to a spatial richness that is both pragmatic and sensorial.
    On the first floor, an open-plan arrangement accommodates the main living spaces. Around a central void, the living and dining areas, kitchen, and master suite are arranged to encourage visual interplay and shared light. This configuration enhances the spatial porosity, ensuring that despite the density of the historic center, the house retains a sense of openness and fluidity. Above, a recessed roof level recedes from the street, culminating in a panoramic terrace with a swimming pool. Here, the building dissolves into the sky, offering expansive views and light-filled leisure spaces that contrast with the more enclosed lower floors.
    Materiality and Craftsmanship
    Materiality plays a decisive role in mediating the building’s relationship with its context. White-painted plaster, a familiar element in the region, is punctuated by deep limestone moldings. These details create a play of light and shadow that emphasizes the facade’s verticality and rhythm. The generous thickness of the walls, carried over from the site’s earlier construction, lends a sense of solidity and permanence to the house, recalling the tactile traditions of the Algarve’s architecture.
    The interior and exterior detailing is characterized by an economy of means, where each material is selected for its ability to reinforce the house’s quiet presence. Local materials and craftsmanship ground the project in its immediate context while responding to environmental imperatives. High thermal comfort is achieved through careful orientation and passive design strategies, complemented by the integration of solar control and water conservation measures. These considerations underscore the project’s commitment to sustainability without resorting to superficial gestures.
    Broader Urban and Cultural Implications
    Beyond its immediate function as a family home, Casa Sofia engages in a broader dialogue with its urban and cultural surroundings. The project exemplifies a measured response to the question of how to build within a historical setting without resorting to nostalgia or pastiche. It demonstrates that contemporary architecture can find resonance within heritage contexts by prioritizing the values of continuity, scale, and material authenticity.
    In its measured dialogue with the Church of St Sebastião and the centuries-old urban landscape of Lagos, Casa Sofia illustrates the potential for architecture to enrich the experience of place through quiet, rigorous interventions. It is a project that reaffirms architecture’s capacity to negotiate between past and present, crafting spaces that are at once deeply contextual and unambiguously of their moment.
    Casa Sofia Plans

    Sketch | © Mário Martins Atelier

    Ground Level | © Mário Martins Atelier

    Level 1 | © Mário Martins Atelier

    Level 2 | © Mário Martins Atelier

    Roof Plan | © Mário Martins Atelier

    Section | © Mário Martins Atelier
    Casa Sofia Image Gallery

    About Mário Martins Atelier
    Mário Martins Atelier is a Portuguese architecture and urbanism practice founded in 2000 by architect Mário Martins, who holds a degree from the Faculty of Architecture at the Technical University of Lisbon. Headquartered in Lagos with a secondary office in Lisbon, the firm operates with a dedicated multidisciplinary team. The office has developed a broad spectrum of work, from single-family homes and collective housing to public buildings and urban regeneration, distinguished by technical precision, contextual sensitivity, and sustainable strategies.
    Credits and Additional Notes

    Lead Architect: Mário Martins, arq.
    Project Team: Rita Rocha, Sónia Fialho, Susana Caetano, Susana Jóia, Ana Graça
    Engineering: Nuno Grave Engenharia
    Building: Marques Antunes Engenharia Lda
    #casa #sofia #mário #martins #atelier
    Casa Sofia by Mário Martins Atelier: A Contemporary Urban Infill in Lagos
    Casa Sofia | © Fernando Guerra / FG+SG Located in the historic heart of Lagos, Portugal, Casa Sofia by Mário Martins Atelier is a thoughtful exercise in urban integration and contemporary reinterpretation. Occupying a site once held by a modest two-story house, the project is situated on the corner of a block facing the Church of St Sebastião. With its commanding presence, this national monument set a formidable challenge for the architects: introducing a new residence that respects the weight of history while offering a clear, contemporary expression. Casa Sofia Technical Information Architects1-4: Mário Martins Atelier Location: Lagos, Portugal Project Completion Years: 2023 Photographs: © Fernando Guerra / FG+SG It is therefore important to design a building to fit into and complete the block. A house that is quiet and solid, with rhythmic metrics, whose new design brings an identity, with the weight and scent of the times, to a city that has existed for many centuries. – Mário Martins Atelier Casa Sofia Photographs © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG Spatial Organization and Circulation The design’s ambition is anchored in reconciling modern residential needs with the dense urban fabric that defines the walled city. Rather than imposing a bold or disruptive form, the project embraces the existing rhythms and textures of the surrounding architecture. The result is a building that both defers to and elevates the neighborhood’s character. Its restrained profile and carefully modulated facade echo the massing and articulation of the original house while introducing an identity that is clearly of its time. At the core of Casa Sofia’s spatial organization is a deliberate hierarchy of spaces that transitions seamlessly between public, semi-public, and private domains. Entry from the street occurs through a modest set of steps leading to an exterior atrium. This threshold mediates the relationship between the public realm and the interior, grounding the house in its urban context. Once inside, an open hall reveals the vertical flow of the building, dominated by a staircase that appears to float, linking the house’s various levels while maintaining visual continuity throughout. The ground floor houses three bedrooms, each with an ensuite bathroom, radiating from the central hall. This level also contains a small basement for technical support, reinforcing the discreet layering of functional and domestic spaces. Midway up the staircase, the house opens onto a garage, a laundry room, and an intimate courtyard. These areas, essential for daily life, are seamlessly integrated into the overall composition, contributing to a spatial richness that is both pragmatic and sensorial. On the first floor, an open-plan arrangement accommodates the main living spaces. Around a central void, the living and dining areas, kitchen, and master suite are arranged to encourage visual interplay and shared light. This configuration enhances the spatial porosity, ensuring that despite the density of the historic center, the house retains a sense of openness and fluidity. Above, a recessed roof level recedes from the street, culminating in a panoramic terrace with a swimming pool. Here, the building dissolves into the sky, offering expansive views and light-filled leisure spaces that contrast with the more enclosed lower floors. Materiality and Craftsmanship Materiality plays a decisive role in mediating the building’s relationship with its context. White-painted plaster, a familiar element in the region, is punctuated by deep limestone moldings. These details create a play of light and shadow that emphasizes the facade’s verticality and rhythm. The generous thickness of the walls, carried over from the site’s earlier construction, lends a sense of solidity and permanence to the house, recalling the tactile traditions of the Algarve’s architecture. The interior and exterior detailing is characterized by an economy of means, where each material is selected for its ability to reinforce the house’s quiet presence. Local materials and craftsmanship ground the project in its immediate context while responding to environmental imperatives. High thermal comfort is achieved through careful orientation and passive design strategies, complemented by the integration of solar control and water conservation measures. These considerations underscore the project’s commitment to sustainability without resorting to superficial gestures. Broader Urban and Cultural Implications Beyond its immediate function as a family home, Casa Sofia engages in a broader dialogue with its urban and cultural surroundings. The project exemplifies a measured response to the question of how to build within a historical setting without resorting to nostalgia or pastiche. It demonstrates that contemporary architecture can find resonance within heritage contexts by prioritizing the values of continuity, scale, and material authenticity. In its measured dialogue with the Church of St Sebastião and the centuries-old urban landscape of Lagos, Casa Sofia illustrates the potential for architecture to enrich the experience of place through quiet, rigorous interventions. It is a project that reaffirms architecture’s capacity to negotiate between past and present, crafting spaces that are at once deeply contextual and unambiguously of their moment. Casa Sofia Plans Sketch | © Mário Martins Atelier Ground Level | © Mário Martins Atelier Level 1 | © Mário Martins Atelier Level 2 | © Mário Martins Atelier Roof Plan | © Mário Martins Atelier Section | © Mário Martins Atelier Casa Sofia Image Gallery About Mário Martins Atelier Mário Martins Atelier is a Portuguese architecture and urbanism practice founded in 2000 by architect Mário Martins, who holds a degree from the Faculty of Architecture at the Technical University of Lisbon. Headquartered in Lagos with a secondary office in Lisbon, the firm operates with a dedicated multidisciplinary team. The office has developed a broad spectrum of work, from single-family homes and collective housing to public buildings and urban regeneration, distinguished by technical precision, contextual sensitivity, and sustainable strategies. Credits and Additional Notes Lead Architect: Mário Martins, arq. Project Team: Rita Rocha, Sónia Fialho, Susana Caetano, Susana Jóia, Ana Graça Engineering: Nuno Grave Engenharia Building: Marques Antunes Engenharia Lda #casa #sofia #mário #martins #atelier
    ARCHEYES.COM
    Casa Sofia by Mário Martins Atelier: A Contemporary Urban Infill in Lagos
    Casa Sofia | © Fernando Guerra / FG+SG Located in the historic heart of Lagos, Portugal, Casa Sofia by Mário Martins Atelier is a thoughtful exercise in urban integration and contemporary reinterpretation. Occupying a site once held by a modest two-story house, the project is situated on the corner of a block facing the Church of St Sebastião. With its commanding presence, this national monument set a formidable challenge for the architects: introducing a new residence that respects the weight of history while offering a clear, contemporary expression. Casa Sofia Technical Information Architects1-4: Mário Martins Atelier Location: Lagos, Portugal Project Completion Years: 2023 Photographs: © Fernando Guerra / FG+SG It is therefore important to design a building to fit into and complete the block. A house that is quiet and solid, with rhythmic metrics, whose new design brings an identity, with the weight and scent of the times, to a city that has existed for many centuries. – Mário Martins Atelier Casa Sofia Photographs © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG © Fernando Guerra / FG+SG Spatial Organization and Circulation The design’s ambition is anchored in reconciling modern residential needs with the dense urban fabric that defines the walled city. Rather than imposing a bold or disruptive form, the project embraces the existing rhythms and textures of the surrounding architecture. The result is a building that both defers to and elevates the neighborhood’s character. Its restrained profile and carefully modulated facade echo the massing and articulation of the original house while introducing an identity that is clearly of its time. At the core of Casa Sofia’s spatial organization is a deliberate hierarchy of spaces that transitions seamlessly between public, semi-public, and private domains. Entry from the street occurs through a modest set of steps leading to an exterior atrium. This threshold mediates the relationship between the public realm and the interior, grounding the house in its urban context. Once inside, an open hall reveals the vertical flow of the building, dominated by a staircase that appears to float, linking the house’s various levels while maintaining visual continuity throughout. The ground floor houses three bedrooms, each with an ensuite bathroom, radiating from the central hall. This level also contains a small basement for technical support, reinforcing the discreet layering of functional and domestic spaces. Midway up the staircase, the house opens onto a garage, a laundry room, and an intimate courtyard. These areas, essential for daily life, are seamlessly integrated into the overall composition, contributing to a spatial richness that is both pragmatic and sensorial. On the first floor, an open-plan arrangement accommodates the main living spaces. Around a central void, the living and dining areas, kitchen, and master suite are arranged to encourage visual interplay and shared light. This configuration enhances the spatial porosity, ensuring that despite the density of the historic center, the house retains a sense of openness and fluidity. Above, a recessed roof level recedes from the street, culminating in a panoramic terrace with a swimming pool. Here, the building dissolves into the sky, offering expansive views and light-filled leisure spaces that contrast with the more enclosed lower floors. Materiality and Craftsmanship Materiality plays a decisive role in mediating the building’s relationship with its context. White-painted plaster, a familiar element in the region, is punctuated by deep limestone moldings. These details create a play of light and shadow that emphasizes the facade’s verticality and rhythm. The generous thickness of the walls, carried over from the site’s earlier construction, lends a sense of solidity and permanence to the house, recalling the tactile traditions of the Algarve’s architecture. The interior and exterior detailing is characterized by an economy of means, where each material is selected for its ability to reinforce the house’s quiet presence. Local materials and craftsmanship ground the project in its immediate context while responding to environmental imperatives. High thermal comfort is achieved through careful orientation and passive design strategies, complemented by the integration of solar control and water conservation measures. These considerations underscore the project’s commitment to sustainability without resorting to superficial gestures. Broader Urban and Cultural Implications Beyond its immediate function as a family home, Casa Sofia engages in a broader dialogue with its urban and cultural surroundings. The project exemplifies a measured response to the question of how to build within a historical setting without resorting to nostalgia or pastiche. It demonstrates that contemporary architecture can find resonance within heritage contexts by prioritizing the values of continuity, scale, and material authenticity. In its measured dialogue with the Church of St Sebastião and the centuries-old urban landscape of Lagos, Casa Sofia illustrates the potential for architecture to enrich the experience of place through quiet, rigorous interventions. It is a project that reaffirms architecture’s capacity to negotiate between past and present, crafting spaces that are at once deeply contextual and unambiguously of their moment. Casa Sofia Plans Sketch | © Mário Martins Atelier Ground Level | © Mário Martins Atelier Level 1 | © Mário Martins Atelier Level 2 | © Mário Martins Atelier Roof Plan | © Mário Martins Atelier Section | © Mário Martins Atelier Casa Sofia Image Gallery About Mário Martins Atelier Mário Martins Atelier is a Portuguese architecture and urbanism practice founded in 2000 by architect Mário Martins, who holds a degree from the Faculty of Architecture at the Technical University of Lisbon (1988). Headquartered in Lagos with a secondary office in Lisbon, the firm operates with a dedicated multidisciplinary team. The office has developed a broad spectrum of work, from single-family homes and collective housing to public buildings and urban regeneration, distinguished by technical precision, contextual sensitivity, and sustainable strategies. Credits and Additional Notes Lead Architect: Mário Martins, arq. Project Team: Rita Rocha, Sónia Fialho, Susana Caetano, Susana Jóia, Ana Graça Engineering: Nuno Grave Engenharia Building: Marques Antunes Engenharia Lda
    Like
    Love
    Wow
    Sad
    Angry
    395
    2 Commentarii 0 Distribuiri
  • NOSIPHO MAKETO-VAN DEN BRAGT ALTERED HER CAREER PATH TO LAUNCH CHOCOLATE TRIBE

    By TREVOR HOGG

    Images courtesy of Chocolate Tribe.

    Nosipho Maketo-van den Bragt, Owner and CEO, Chocolate Tribe

    After initially pursuing a career as an attorney, Nosipho Maketo-van den Bragt discovered her true calling was to apply her legal knowledge in a more artistic endeavor with her husband, Rob Van den Bragt, who had forged a career as a visual effects supervisor. The couple co-founded Chocolate Tribe, the Johannesburg and Cape Town-based visual effects and animation studio that has done work for Netflix, BBC, Disney and Voltage Pictures.

    “It was following my passion and my passion finding me,” observes Maketo-van den Bragt, Owner and CEO of Chocolate Tribe and Founder of AVIJOZI. “I grew up in Soweto, South Africa, and we had this old-fashioned television. I was always fascinated by how those people got in there to perform and entertain us. Living in the townships, you become the funnel for your parents’ aspirations and dreams. My dad was a judge’s registrar, so he was writing all of the court cases coming up for a judge. My dad would come home and tell us stories of what happened in court. I found this enthralling, funny and sometimes painful because it was about people’s lives. I did law and to some extent still practice it. My legal career and entertainment media careers merged because I fell in love with the storytelling aspect of it all. There are those who say that lawyers are failed actors!”

    Chocolate Tribe hosts what has become the annual AVIJOZI festival with Netflix. AVIJOZI is a two-day, free-access event in Johannesburg focused on Animation/Film, Visual Effects and Interactive Technology. This year’s AVIJOZI is scheduled for September 13-14 in Johannesburg. Photo: Casting Director and Actor Spaces Founder Ayanda Sithebeand friends at AVIJOZI 2024.

    A personal ambition was to find a way to merge married life into a professional partnership. “I never thought that a lawyer and a creative would work together,” admits Maketo-van den Bragt. “However, Rob and I had this great love for watching films together and music; entertainment was the core fabric of our relationship. That was my first gentle schooling into the visual effects and animation content development space. Starting the company was due to both of us being out of work. I had quit my job without any sort of plan B. I actually incorporated Chocolate Tribe as a company without knowing what we would do with it. As time went on, there was a project that we were asked to come to do. The relationship didn’t work out, so Rob and I decided, ‘Okay, it seems like we can do this on our own.’ I’ve read many books about visual effects and animation, and I still do. I attend a lot of festivals. I am connected with a lot of the guys who work in different visual effects spaces because it is all about understanding how it works and, from a business side, how can we leverage all of that information?”

    Chocolate Tribe provided VFX and post-production for Checkers supermarket’s “Planet” ad promoting environmental sustainability. The Chocolate Tribe team pushed photorealism for the ad, creating three fully CG creatures: a polar bear, orangutan and sea turtle.

    With a population of 1.5 billion, there is no shortage of consumers and content creators in Africa. “Nollywood is great because it shows us that even with minimal resources, you can create a whole movement and ecosystem,” Maketo-van den Bragt remarks. “Maybe the question around Nollywood is making sure that the caliber and quality of work is high end and speaks to a global audience. South Africa has the same dynamics. It’s a vibrant traditional film and animation industry that grows in leaps and bounds every year. More and more animation houses are being incorporated or started with CEOs or managing directors in their 20s. There’s also an eagerness to look for different stories which haven’t been told. Africa gives that opportunity to tell stories that ordinary people, for example, in America, have not heard or don’t know about. There’s a huge rise in animation, visual effects and content in general.”

    Rob van den Bragt served as Creative Supervisor and Nosipho Maketo-van den Bragt as Studio Executive for the “Surf Sangoma” episode of the Disney+ series Kizazi Moto: Generation Fire.

    Rob van den Bragt, CCO, and Nosipho Maketo-van den Bragt, CEO, Co-Founders of Chocolate Tribe, in an AVIJOZI planning meeting.

    Stella Gono, Software Developer, working on the Chocolate Tribe website.

    Family photo of the Maketos. Maketo-van de Bragt has two siblings.

    Film tax credits have contributed to The Woman King, Dredd, Safe House, Black Sails and Mission: Impossible – Final Reckoning shooting in South Africa. “People understand principal photography, but there is confusion about animation and visual effects,” Maketo-van den Bragt states. “Rebates pose a challenge because now you have to go above and beyond to explain what you are selling. It’s taken time for the government to realize this is a viable career.” The streamers have had a positive impact. “For the most part, Netflix localizes, and that’s been quite a big hit because it speaks to the demographics and local representation and uplifts talent within those geographical spaces. We did one of the shorts for Disney’s Kizazi Moto: Generation Fire, and there was huge global excitement to that kind of anthology coming from Africa. We’ve worked on a number of collaborations with the U.K., and often that melding of different partners creates a fusion of universality. We need to tell authentic stories, and that authenticity will be dictated by the voices in the writing room.”

    AVIJOZI was established to support the development of local talent in animation, visual effects, film production and gaming. “AVIJOZI stands for Animation Visual Effects Interactive in JOZI,” Maketo-van den Bragt explains. “It is a conference as well as a festival. The conference part is where we have networking sessions, panel discussions and behind-the-scenes presentations to draw the curtain back and show what happens when people create avatars. We want to show the next generation that there is a way to do this magical craft. The festival part is people have film screenings and music as well. We’ve brought in gaming as an integral aspect, which attracts many young people because that’s something they do at an early age. Gaming has become the common sport. AVIJOVI is in its fourth year now. It started when I got irritated by people constantly complaining, ‘Nothing ever happens in Johannesburg in terms of animation and visual effects.’ Nobody wanted to do it. So, I said, ‘I’ll do it.’ I didn’t know what I was getting myself into, and four years later I have lots of gray hair!”

    Rob van den Bragt served as Animation Supervisor/Visual Effects Supervisor and Nosipho Maketo-van den Bragt as an Executive Producer on iNumber Number: Jozi Goldfor Netflix.Mentorship and internship programs have been established with various academic institutions, and while there are times when specific skills are being sought, like rigging, the field of view tends to be much wider. “What we are finding is that the people who have done other disciplines are much more vibrant,” Maketo-van den Bragt states. “Artists don’t always know how to communicate because it’s all in their heads. Sometimes, somebody with a different background can articulate that vision a bit better because they have those other skills. We also find with those who have gone to art school that the range within their artistry and craftsmanship has become a ‘thing.’ When you have mentally traveled where you have done other things, it allows you to be a more well-rounded artist because you can pull references from different walks of life and engage with different topics without being constrained to one thing. We look for people with a plethora of skills and diverse backgrounds. It’s a lot richer as a Chocolate Tribe. There are multiple flavors.”

    South African director/producer/cinematographer and drone cinemtography specialist FC Hamman, Founder of FC Hamman Films, at AVIJOZI 2024.

    There is a particular driving force when it comes to mentoring. “I want to be the mentor I hoped for,” Maketo-van den Bragt remarks. “I have silent mentors in that we didn’t formalize the relationship, but I knew they were my mentors because every time I would encounter an issue, I would be able to call them. One of the people who not only mentored but pushed me into different spaces is Jinko Gotoh, who is part of Women in Animation. She brought me into Women in Animation, and I had never mentored anybody. Here I was, sitting with six women who wanted to know how I was able to build up Chocolate Tribe. I didn’t know how to structure a presentation to tell them about the journey because I had been so focused on the journey. It’s a sense of grit and feeling that I cannot fail because I have a whole community that believes in me. Even when I felt my shoulders sagging, they would be there to say, ‘We need this. Keep it moving.’ This isn’t just about me. I have a whole stream of people who want this to work.”

    Netflix VFX Manager Ben Perry, who oversees Netflix’s VFX strategy across Africa, the Middle East and Europe, at AVIJOZI 2024. Netflix was a partner in AVIJOZI with Chocolate Tribe for three years.

    Zama Mfusi, Founder of IndiLang, and Isabelle Rorke, CEO of Dreamforge Creative and Deputy Chair of Animation SA, at AVIJOZI 2024.

    Numerous unknown factors had to be accounted for, which made predicting how the journey would unfold extremely difficult. “What it looks like and what I expected it to be, you don’t have the full sense of what it would lead to in this situation,” Maketo-van den Bragt states. “I can tell you that there have been moments of absolute joy where I was so excited we got this project or won that award. There are other moments where you feel completely lost and ask yourself, ‘Am I doing the right thing?’ The journey is to have the highs, lows and moments of confusion. I go through it and accept that not every day will be an award-winning day. For the most part, I love this journey. I wanted to be somewhere where there was a purpose. What has been a big highlight is when I’m signing a contract for new employees who are excited about being part of Chocolate Tribe. Also, when you get a new project and it’s exciting, especially from a service or visual effects perspective, we’re constantly looking for that dragon or big creature. It’s about being mesmerizing, epic and awesome.”

    Maketo-van den Bragt has two major career-defining ambitions. “Fostering the next generation of talent and making sure that they are ready to create these amazing stories properly – that is my life work, and relating the African narrative to let the world see the human aspect of who we are because for the longest time we’ve been written out of the stories and narratives.”
    #nosipho #maketovan #den #bragt #altered
    NOSIPHO MAKETO-VAN DEN BRAGT ALTERED HER CAREER PATH TO LAUNCH CHOCOLATE TRIBE
    By TREVOR HOGG Images courtesy of Chocolate Tribe. Nosipho Maketo-van den Bragt, Owner and CEO, Chocolate Tribe After initially pursuing a career as an attorney, Nosipho Maketo-van den Bragt discovered her true calling was to apply her legal knowledge in a more artistic endeavor with her husband, Rob Van den Bragt, who had forged a career as a visual effects supervisor. The couple co-founded Chocolate Tribe, the Johannesburg and Cape Town-based visual effects and animation studio that has done work for Netflix, BBC, Disney and Voltage Pictures. “It was following my passion and my passion finding me,” observes Maketo-van den Bragt, Owner and CEO of Chocolate Tribe and Founder of AVIJOZI. “I grew up in Soweto, South Africa, and we had this old-fashioned television. I was always fascinated by how those people got in there to perform and entertain us. Living in the townships, you become the funnel for your parents’ aspirations and dreams. My dad was a judge’s registrar, so he was writing all of the court cases coming up for a judge. My dad would come home and tell us stories of what happened in court. I found this enthralling, funny and sometimes painful because it was about people’s lives. I did law and to some extent still practice it. My legal career and entertainment media careers merged because I fell in love with the storytelling aspect of it all. There are those who say that lawyers are failed actors!” Chocolate Tribe hosts what has become the annual AVIJOZI festival with Netflix. AVIJOZI is a two-day, free-access event in Johannesburg focused on Animation/Film, Visual Effects and Interactive Technology. This year’s AVIJOZI is scheduled for September 13-14 in Johannesburg. Photo: Casting Director and Actor Spaces Founder Ayanda Sithebeand friends at AVIJOZI 2024. A personal ambition was to find a way to merge married life into a professional partnership. “I never thought that a lawyer and a creative would work together,” admits Maketo-van den Bragt. “However, Rob and I had this great love for watching films together and music; entertainment was the core fabric of our relationship. That was my first gentle schooling into the visual effects and animation content development space. Starting the company was due to both of us being out of work. I had quit my job without any sort of plan B. I actually incorporated Chocolate Tribe as a company without knowing what we would do with it. As time went on, there was a project that we were asked to come to do. The relationship didn’t work out, so Rob and I decided, ‘Okay, it seems like we can do this on our own.’ I’ve read many books about visual effects and animation, and I still do. I attend a lot of festivals. I am connected with a lot of the guys who work in different visual effects spaces because it is all about understanding how it works and, from a business side, how can we leverage all of that information?” Chocolate Tribe provided VFX and post-production for Checkers supermarket’s “Planet” ad promoting environmental sustainability. The Chocolate Tribe team pushed photorealism for the ad, creating three fully CG creatures: a polar bear, orangutan and sea turtle. With a population of 1.5 billion, there is no shortage of consumers and content creators in Africa. “Nollywood is great because it shows us that even with minimal resources, you can create a whole movement and ecosystem,” Maketo-van den Bragt remarks. “Maybe the question around Nollywood is making sure that the caliber and quality of work is high end and speaks to a global audience. South Africa has the same dynamics. It’s a vibrant traditional film and animation industry that grows in leaps and bounds every year. More and more animation houses are being incorporated or started with CEOs or managing directors in their 20s. There’s also an eagerness to look for different stories which haven’t been told. Africa gives that opportunity to tell stories that ordinary people, for example, in America, have not heard or don’t know about. There’s a huge rise in animation, visual effects and content in general.” Rob van den Bragt served as Creative Supervisor and Nosipho Maketo-van den Bragt as Studio Executive for the “Surf Sangoma” episode of the Disney+ series Kizazi Moto: Generation Fire. Rob van den Bragt, CCO, and Nosipho Maketo-van den Bragt, CEO, Co-Founders of Chocolate Tribe, in an AVIJOZI planning meeting. Stella Gono, Software Developer, working on the Chocolate Tribe website. Family photo of the Maketos. Maketo-van de Bragt has two siblings. Film tax credits have contributed to The Woman King, Dredd, Safe House, Black Sails and Mission: Impossible – Final Reckoning shooting in South Africa. “People understand principal photography, but there is confusion about animation and visual effects,” Maketo-van den Bragt states. “Rebates pose a challenge because now you have to go above and beyond to explain what you are selling. It’s taken time for the government to realize this is a viable career.” The streamers have had a positive impact. “For the most part, Netflix localizes, and that’s been quite a big hit because it speaks to the demographics and local representation and uplifts talent within those geographical spaces. We did one of the shorts for Disney’s Kizazi Moto: Generation Fire, and there was huge global excitement to that kind of anthology coming from Africa. We’ve worked on a number of collaborations with the U.K., and often that melding of different partners creates a fusion of universality. We need to tell authentic stories, and that authenticity will be dictated by the voices in the writing room.” AVIJOZI was established to support the development of local talent in animation, visual effects, film production and gaming. “AVIJOZI stands for Animation Visual Effects Interactive in JOZI,” Maketo-van den Bragt explains. “It is a conference as well as a festival. The conference part is where we have networking sessions, panel discussions and behind-the-scenes presentations to draw the curtain back and show what happens when people create avatars. We want to show the next generation that there is a way to do this magical craft. The festival part is people have film screenings and music as well. We’ve brought in gaming as an integral aspect, which attracts many young people because that’s something they do at an early age. Gaming has become the common sport. AVIJOVI is in its fourth year now. It started when I got irritated by people constantly complaining, ‘Nothing ever happens in Johannesburg in terms of animation and visual effects.’ Nobody wanted to do it. So, I said, ‘I’ll do it.’ I didn’t know what I was getting myself into, and four years later I have lots of gray hair!” Rob van den Bragt served as Animation Supervisor/Visual Effects Supervisor and Nosipho Maketo-van den Bragt as an Executive Producer on iNumber Number: Jozi Goldfor Netflix.Mentorship and internship programs have been established with various academic institutions, and while there are times when specific skills are being sought, like rigging, the field of view tends to be much wider. “What we are finding is that the people who have done other disciplines are much more vibrant,” Maketo-van den Bragt states. “Artists don’t always know how to communicate because it’s all in their heads. Sometimes, somebody with a different background can articulate that vision a bit better because they have those other skills. We also find with those who have gone to art school that the range within their artistry and craftsmanship has become a ‘thing.’ When you have mentally traveled where you have done other things, it allows you to be a more well-rounded artist because you can pull references from different walks of life and engage with different topics without being constrained to one thing. We look for people with a plethora of skills and diverse backgrounds. It’s a lot richer as a Chocolate Tribe. There are multiple flavors.” South African director/producer/cinematographer and drone cinemtography specialist FC Hamman, Founder of FC Hamman Films, at AVIJOZI 2024. There is a particular driving force when it comes to mentoring. “I want to be the mentor I hoped for,” Maketo-van den Bragt remarks. “I have silent mentors in that we didn’t formalize the relationship, but I knew they were my mentors because every time I would encounter an issue, I would be able to call them. One of the people who not only mentored but pushed me into different spaces is Jinko Gotoh, who is part of Women in Animation. She brought me into Women in Animation, and I had never mentored anybody. Here I was, sitting with six women who wanted to know how I was able to build up Chocolate Tribe. I didn’t know how to structure a presentation to tell them about the journey because I had been so focused on the journey. It’s a sense of grit and feeling that I cannot fail because I have a whole community that believes in me. Even when I felt my shoulders sagging, they would be there to say, ‘We need this. Keep it moving.’ This isn’t just about me. I have a whole stream of people who want this to work.” Netflix VFX Manager Ben Perry, who oversees Netflix’s VFX strategy across Africa, the Middle East and Europe, at AVIJOZI 2024. Netflix was a partner in AVIJOZI with Chocolate Tribe for three years. Zama Mfusi, Founder of IndiLang, and Isabelle Rorke, CEO of Dreamforge Creative and Deputy Chair of Animation SA, at AVIJOZI 2024. Numerous unknown factors had to be accounted for, which made predicting how the journey would unfold extremely difficult. “What it looks like and what I expected it to be, you don’t have the full sense of what it would lead to in this situation,” Maketo-van den Bragt states. “I can tell you that there have been moments of absolute joy where I was so excited we got this project or won that award. There are other moments where you feel completely lost and ask yourself, ‘Am I doing the right thing?’ The journey is to have the highs, lows and moments of confusion. I go through it and accept that not every day will be an award-winning day. For the most part, I love this journey. I wanted to be somewhere where there was a purpose. What has been a big highlight is when I’m signing a contract for new employees who are excited about being part of Chocolate Tribe. Also, when you get a new project and it’s exciting, especially from a service or visual effects perspective, we’re constantly looking for that dragon or big creature. It’s about being mesmerizing, epic and awesome.” Maketo-van den Bragt has two major career-defining ambitions. “Fostering the next generation of talent and making sure that they are ready to create these amazing stories properly – that is my life work, and relating the African narrative to let the world see the human aspect of who we are because for the longest time we’ve been written out of the stories and narratives.” #nosipho #maketovan #den #bragt #altered
    WWW.VFXVOICE.COM
    NOSIPHO MAKETO-VAN DEN BRAGT ALTERED HER CAREER PATH TO LAUNCH CHOCOLATE TRIBE
    By TREVOR HOGG Images courtesy of Chocolate Tribe. Nosipho Maketo-van den Bragt, Owner and CEO, Chocolate Tribe After initially pursuing a career as an attorney, Nosipho Maketo-van den Bragt discovered her true calling was to apply her legal knowledge in a more artistic endeavor with her husband, Rob Van den Bragt, who had forged a career as a visual effects supervisor. The couple co-founded Chocolate Tribe, the Johannesburg and Cape Town-based visual effects and animation studio that has done work for Netflix, BBC, Disney and Voltage Pictures. “It was following my passion and my passion finding me,” observes Maketo-van den Bragt, Owner and CEO of Chocolate Tribe and Founder of AVIJOZI. “I grew up in Soweto, South Africa, and we had this old-fashioned television. I was always fascinated by how those people got in there to perform and entertain us. Living in the townships, you become the funnel for your parents’ aspirations and dreams. My dad was a judge’s registrar, so he was writing all of the court cases coming up for a judge. My dad would come home and tell us stories of what happened in court. I found this enthralling, funny and sometimes painful because it was about people’s lives. I did law and to some extent still practice it. My legal career and entertainment media careers merged because I fell in love with the storytelling aspect of it all. There are those who say that lawyers are failed actors!” Chocolate Tribe hosts what has become the annual AVIJOZI festival with Netflix. AVIJOZI is a two-day, free-access event in Johannesburg focused on Animation/Film, Visual Effects and Interactive Technology. This year’s AVIJOZI is scheduled for September 13-14 in Johannesburg. Photo: Casting Director and Actor Spaces Founder Ayanda Sithebe (center in black T-shirt) and friends at AVIJOZI 2024. A personal ambition was to find a way to merge married life into a professional partnership. “I never thought that a lawyer and a creative would work together,” admits Maketo-van den Bragt. “However, Rob and I had this great love for watching films together and music; entertainment was the core fabric of our relationship. That was my first gentle schooling into the visual effects and animation content development space. Starting the company was due to both of us being out of work. I had quit my job without any sort of plan B. I actually incorporated Chocolate Tribe as a company without knowing what we would do with it. As time went on, there was a project that we were asked to come to do. The relationship didn’t work out, so Rob and I decided, ‘Okay, it seems like we can do this on our own.’ I’ve read many books about visual effects and animation, and I still do. I attend a lot of festivals. I am connected with a lot of the guys who work in different visual effects spaces because it is all about understanding how it works and, from a business side, how can we leverage all of that information?” Chocolate Tribe provided VFX and post-production for Checkers supermarket’s “Planet” ad promoting environmental sustainability. The Chocolate Tribe team pushed photorealism for the ad, creating three fully CG creatures: a polar bear, orangutan and sea turtle. With a population of 1.5 billion, there is no shortage of consumers and content creators in Africa. “Nollywood is great because it shows us that even with minimal resources, you can create a whole movement and ecosystem,” Maketo-van den Bragt remarks. “Maybe the question around Nollywood is making sure that the caliber and quality of work is high end and speaks to a global audience. South Africa has the same dynamics. It’s a vibrant traditional film and animation industry that grows in leaps and bounds every year. More and more animation houses are being incorporated or started with CEOs or managing directors in their 20s. There’s also an eagerness to look for different stories which haven’t been told. Africa gives that opportunity to tell stories that ordinary people, for example, in America, have not heard or don’t know about. There’s a huge rise in animation, visual effects and content in general.” Rob van den Bragt served as Creative Supervisor and Nosipho Maketo-van den Bragt as Studio Executive for the “Surf Sangoma” episode of the Disney+ series Kizazi Moto: Generation Fire. Rob van den Bragt, CCO, and Nosipho Maketo-van den Bragt, CEO, Co-Founders of Chocolate Tribe, in an AVIJOZI planning meeting. Stella Gono, Software Developer, working on the Chocolate Tribe website. Family photo of the Maketos. Maketo-van de Bragt has two siblings. Film tax credits have contributed to The Woman King, Dredd, Safe House, Black Sails and Mission: Impossible – Final Reckoning shooting in South Africa. “People understand principal photography, but there is confusion about animation and visual effects,” Maketo-van den Bragt states. “Rebates pose a challenge because now you have to go above and beyond to explain what you are selling. It’s taken time for the government to realize this is a viable career.” The streamers have had a positive impact. “For the most part, Netflix localizes, and that’s been quite a big hit because it speaks to the demographics and local representation and uplifts talent within those geographical spaces. We did one of the shorts for Disney’s Kizazi Moto: Generation Fire, and there was huge global excitement to that kind of anthology coming from Africa. We’ve worked on a number of collaborations with the U.K., and often that melding of different partners creates a fusion of universality. We need to tell authentic stories, and that authenticity will be dictated by the voices in the writing room.” AVIJOZI was established to support the development of local talent in animation, visual effects, film production and gaming. “AVIJOZI stands for Animation Visual Effects Interactive in JOZI [nickname for Johannesburg],” Maketo-van den Bragt explains. “It is a conference as well as a festival. The conference part is where we have networking sessions, panel discussions and behind-the-scenes presentations to draw the curtain back and show what happens when people create avatars. We want to show the next generation that there is a way to do this magical craft. The festival part is people have film screenings and music as well. We’ve brought in gaming as an integral aspect, which attracts many young people because that’s something they do at an early age. Gaming has become the common sport. AVIJOVI is in its fourth year now. It started when I got irritated by people constantly complaining, ‘Nothing ever happens in Johannesburg in terms of animation and visual effects.’ Nobody wanted to do it. So, I said, ‘I’ll do it.’ I didn’t know what I was getting myself into, and four years later I have lots of gray hair!” Rob van den Bragt served as Animation Supervisor/Visual Effects Supervisor and Nosipho Maketo-van den Bragt as an Executive Producer on iNumber Number: Jozi Gold (2023) for Netflix. (Image courtesy of Chocolate Tribe and Netflix) Mentorship and internship programs have been established with various academic institutions, and while there are times when specific skills are being sought, like rigging, the field of view tends to be much wider. “What we are finding is that the people who have done other disciplines are much more vibrant,” Maketo-van den Bragt states. “Artists don’t always know how to communicate because it’s all in their heads. Sometimes, somebody with a different background can articulate that vision a bit better because they have those other skills. We also find with those who have gone to art school that the range within their artistry and craftsmanship has become a ‘thing.’ When you have mentally traveled where you have done other things, it allows you to be a more well-rounded artist because you can pull references from different walks of life and engage with different topics without being constrained to one thing. We look for people with a plethora of skills and diverse backgrounds. It’s a lot richer as a Chocolate Tribe. There are multiple flavors.” South African director/producer/cinematographer and drone cinemtography specialist FC Hamman, Founder of FC Hamman Films, at AVIJOZI 2024. There is a particular driving force when it comes to mentoring. “I want to be the mentor I hoped for,” Maketo-van den Bragt remarks. “I have silent mentors in that we didn’t formalize the relationship, but I knew they were my mentors because every time I would encounter an issue, I would be able to call them. One of the people who not only mentored but pushed me into different spaces is Jinko Gotoh, who is part of Women in Animation. She brought me into Women in Animation, and I had never mentored anybody. Here I was, sitting with six women who wanted to know how I was able to build up Chocolate Tribe. I didn’t know how to structure a presentation to tell them about the journey because I had been so focused on the journey. It’s a sense of grit and feeling that I cannot fail because I have a whole community that believes in me. Even when I felt my shoulders sagging, they would be there to say, ‘We need this. Keep it moving.’ This isn’t just about me. I have a whole stream of people who want this to work.” Netflix VFX Manager Ben Perry, who oversees Netflix’s VFX strategy across Africa, the Middle East and Europe, at AVIJOZI 2024. Netflix was a partner in AVIJOZI with Chocolate Tribe for three years. Zama Mfusi, Founder of IndiLang, and Isabelle Rorke, CEO of Dreamforge Creative and Deputy Chair of Animation SA, at AVIJOZI 2024. Numerous unknown factors had to be accounted for, which made predicting how the journey would unfold extremely difficult. “What it looks like and what I expected it to be, you don’t have the full sense of what it would lead to in this situation,” Maketo-van den Bragt states. “I can tell you that there have been moments of absolute joy where I was so excited we got this project or won that award. There are other moments where you feel completely lost and ask yourself, ‘Am I doing the right thing?’ The journey is to have the highs, lows and moments of confusion. I go through it and accept that not every day will be an award-winning day. For the most part, I love this journey. I wanted to be somewhere where there was a purpose. What has been a big highlight is when I’m signing a contract for new employees who are excited about being part of Chocolate Tribe. Also, when you get a new project and it’s exciting, especially from a service or visual effects perspective, we’re constantly looking for that dragon or big creature. It’s about being mesmerizing, epic and awesome.” Maketo-van den Bragt has two major career-defining ambitions. “Fostering the next generation of talent and making sure that they are ready to create these amazing stories properly – that is my life work, and relating the African narrative to let the world see the human aspect of who we are because for the longest time we’ve been written out of the stories and narratives.”
    Like
    Love
    Wow
    Angry
    Sad
    397
    0 Commentarii 0 Distribuiri
  • Komires: Matali Physics 6.9 Released

    We are pleased to announce the release of Matali Physics 6.9, the next significant step on the way to the seventh major version of the environment. Matali Physics 6.9 introduces a number of improvements and fixes to Matali Physics Core, Matali Render and Matali Games modules, presents physics-driven, completely dynamic light sources, real-time object scaling with destruction, lighting model simulating global illuminationin some aspects, comprehensive support for Wayland on Linux, and more.

    Posted by komires on Jun 3rd, 2025
    What is Matali Physics?
    Matali Physics is an advanced, modern, multi-platform, high-performance 3d physics environment intended for games, VR, AR, physics-based simulations and robotics. Matali Physics consists of the advanced 3d physics engine Matali Physics Core and other physics-driven modules that all together provide comprehensive simulation of physical phenomena and physics-based modeling of both real and imaginary objects.
    What's new in version 6.9?

    Physics-driven, completely dynamic light sources. The introduced solution allows for processing hundreds of movable, long-range and shadow-casting light sources, where with each source can be assigned logic that controls its behavior, changes light parameters, volumetric effects parameters and others;
    Real-time object scaling with destruction. All groups of physics objects and groups of physics objects with constraints may be subject to destruction process during real-time scaling, allowing group members to break off at different sizes;
    Lighting model simulating global illuminationin some aspects. Based on own research and development work, processed in real time, ready for dynamic scenes, fast on mobile devices, not based on lightmaps, light probes, baked lights, etc.;
    Comprehensive support for Wayland on Linux. The latest version allows Matali Physics SDK users to create advanced, high-performance, physics-based, Vulkan-based games for modern Linux distributions where Wayland is the main display server protocol;
    Other improvements and fixes which complete list is available on the History webpage.

    What platforms does Matali Physics support?

    Android
    Android TV
    *BSD
    iOS
    iPadOS
    LinuxmacOS
    Steam Deck
    tvOS
    UWPWindowsWhat are the benefits of using Matali Physics?

    Physics simulation, graphics, sound and music integrated into one total multimedia solution where creating complex interactions and behaviors is common and relatively easy
    Composed of dedicated modules that do not require additional licences and fees
    Supports fully dynamic and destructible scenes
    Supports physics-based behavioral animations
    Supports physical AI, object motion and state change control
    Supports physics-based GUI
    Supports physics-based particle effects
    Supports multi-scene physics simulation and scene combining
    Supports physics-based photo mode
    Supports physics-driven sound
    Supports physics-driven music
    Supports debug visualization
    Fully serializable and deserializable
    Available for all major mobile, desktop and TV platforms
    New features on request
    Dedicated technical support
    Regular updates and fixes

    If you have questions related to the latest version and the use of Matali Physics environment as a game creation solution, please do not hesitate to contact us.
    #komires #matali #physics #released
    Komires: Matali Physics 6.9 Released
    We are pleased to announce the release of Matali Physics 6.9, the next significant step on the way to the seventh major version of the environment. Matali Physics 6.9 introduces a number of improvements and fixes to Matali Physics Core, Matali Render and Matali Games modules, presents physics-driven, completely dynamic light sources, real-time object scaling with destruction, lighting model simulating global illuminationin some aspects, comprehensive support for Wayland on Linux, and more. Posted by komires on Jun 3rd, 2025 What is Matali Physics? Matali Physics is an advanced, modern, multi-platform, high-performance 3d physics environment intended for games, VR, AR, physics-based simulations and robotics. Matali Physics consists of the advanced 3d physics engine Matali Physics Core and other physics-driven modules that all together provide comprehensive simulation of physical phenomena and physics-based modeling of both real and imaginary objects. What's new in version 6.9? Physics-driven, completely dynamic light sources. The introduced solution allows for processing hundreds of movable, long-range and shadow-casting light sources, where with each source can be assigned logic that controls its behavior, changes light parameters, volumetric effects parameters and others; Real-time object scaling with destruction. All groups of physics objects and groups of physics objects with constraints may be subject to destruction process during real-time scaling, allowing group members to break off at different sizes; Lighting model simulating global illuminationin some aspects. Based on own research and development work, processed in real time, ready for dynamic scenes, fast on mobile devices, not based on lightmaps, light probes, baked lights, etc.; Comprehensive support for Wayland on Linux. The latest version allows Matali Physics SDK users to create advanced, high-performance, physics-based, Vulkan-based games for modern Linux distributions where Wayland is the main display server protocol; Other improvements and fixes which complete list is available on the History webpage. What platforms does Matali Physics support? Android Android TV *BSD iOS iPadOS LinuxmacOS Steam Deck tvOS UWPWindowsWhat are the benefits of using Matali Physics? Physics simulation, graphics, sound and music integrated into one total multimedia solution where creating complex interactions and behaviors is common and relatively easy Composed of dedicated modules that do not require additional licences and fees Supports fully dynamic and destructible scenes Supports physics-based behavioral animations Supports physical AI, object motion and state change control Supports physics-based GUI Supports physics-based particle effects Supports multi-scene physics simulation and scene combining Supports physics-based photo mode Supports physics-driven sound Supports physics-driven music Supports debug visualization Fully serializable and deserializable Available for all major mobile, desktop and TV platforms New features on request Dedicated technical support Regular updates and fixes If you have questions related to the latest version and the use of Matali Physics environment as a game creation solution, please do not hesitate to contact us. #komires #matali #physics #released
    WWW.INDIEDB.COM
    Komires: Matali Physics 6.9 Released
    We are pleased to announce the release of Matali Physics 6.9, the next significant step on the way to the seventh major version of the environment. Matali Physics 6.9 introduces a number of improvements and fixes to Matali Physics Core, Matali Render and Matali Games modules, presents physics-driven, completely dynamic light sources, real-time object scaling with destruction, lighting model simulating global illumination (GI) in some aspects, comprehensive support for Wayland on Linux, and more. Posted by komires on Jun 3rd, 2025 What is Matali Physics? Matali Physics is an advanced, modern, multi-platform, high-performance 3d physics environment intended for games, VR, AR, physics-based simulations and robotics. Matali Physics consists of the advanced 3d physics engine Matali Physics Core and other physics-driven modules that all together provide comprehensive simulation of physical phenomena and physics-based modeling of both real and imaginary objects. What's new in version 6.9? Physics-driven, completely dynamic light sources. The introduced solution allows for processing hundreds of movable, long-range and shadow-casting light sources, where with each source can be assigned logic that controls its behavior, changes light parameters, volumetric effects parameters and others; Real-time object scaling with destruction. All groups of physics objects and groups of physics objects with constraints may be subject to destruction process during real-time scaling, allowing group members to break off at different sizes; Lighting model simulating global illumination (GI) in some aspects. Based on own research and development work, processed in real time, ready for dynamic scenes, fast on mobile devices, not based on lightmaps, light probes, baked lights, etc.; Comprehensive support for Wayland on Linux. The latest version allows Matali Physics SDK users to create advanced, high-performance, physics-based, Vulkan-based games for modern Linux distributions where Wayland is the main display server protocol; Other improvements and fixes which complete list is available on the History webpage. What platforms does Matali Physics support? Android Android TV *BSD iOS iPadOS Linux (distributions) macOS Steam Deck tvOS UWP (Desktop, Xbox Series X/S) Windows (Classic, GDK, Handheld consoles) What are the benefits of using Matali Physics? Physics simulation, graphics, sound and music integrated into one total multimedia solution where creating complex interactions and behaviors is common and relatively easy Composed of dedicated modules that do not require additional licences and fees Supports fully dynamic and destructible scenes Supports physics-based behavioral animations Supports physical AI, object motion and state change control Supports physics-based GUI Supports physics-based particle effects Supports multi-scene physics simulation and scene combining Supports physics-based photo mode Supports physics-driven sound Supports physics-driven music Supports debug visualization Fully serializable and deserializable Available for all major mobile, desktop and TV platforms New features on request Dedicated technical support Regular updates and fixes If you have questions related to the latest version and the use of Matali Physics environment as a game creation solution, please do not hesitate to contact us.
    0 Commentarii 0 Distribuiri
  • 9 menial tasks ChatGPT can handle in seconds, saving you hours

    ChatGPT is rapidly changing the world. The process is already happening, and it’s only going to accelerate as the technology improves, as more people gain access to it, and as more learn how to use it.
    What’s shocking is just how many tasks ChatGPT is already capable of managing for you. While the naysayers may still look down their noses at the potential of AI assistants, I’ve been using it to handle all kinds of menial tasks for me. Here are my favorite examples.

    Further reading: This tiny ChatGPT feature helps me tackle my days more productively

    Write your emails for you
    Dave Parrack / Foundry
    We’ve all been faced with the tricky task of writing an email—whether personal or professional—but not knowing quite how to word it. ChatGPT can do the heavy lifting for you, penning theperfect email based on whatever information you feed it.
    Let’s assume the email you need to write is of a professional nature, and wording it poorly could negatively affect your career. By directing ChatGPT to write the email with a particular structure, content, and tone of voice, you can give yourself a huge head start.
    A winning tip for this is to never accept ChatGPT’s first attempt. Always read through it and look for areas of improvement, then request tweaks to ensure you get the best possible email. You canalso rewrite the email in your own voice. Learn more about how ChatGPT coached my colleague to write better emails.

    Generate itineraries and schedules
    Dave Parrack / Foundry
    If you’re going on a trip but you’re the type of person who hates planning trips, then you should utilize ChatGPT’s ability to generate trip itineraries. The results can be customized to the nth degree depending on how much detail and instruction you’re willing to provide.
    As someone who likes to get away at least once a year but also wants to make the most of every trip, leaning on ChatGPT for an itinerary is essential for me. I’ll provide the location and the kinds of things I want to see and do, then let it handle the rest. Instead of spending days researching everything myself, ChatGPT does 80 percent of it for me.
    As with all of these tasks, you don’t need to accept ChatGPT’s first effort. Use different prompts to force the AI chatbot to shape the itinerary closer to what you want. You’d be surprised at how many cool ideas you’ll encounter this way—simply nix the ones you don’t like.

    Break down difficult concepts
    Dave Parrack / Foundry
    One of the best tasks to assign to ChatGPT is the explanation of difficult concepts. Ask ChatGPT to explain any concept you can think of and it will deliver more often than not. You can tailor the level of explanation you need, and even have it include visual elements.
    Let’s say, for example, that a higher-up at work regularly lectures everyone about the importance of networking. But maybe they never go into detail about what they mean, just constantly pushing the why without explaining the what. Well, just ask ChatGPT to explain networking!
    Okay, most of us know what “networking” is and the concept isn’t very hard to grasp. But you can do this with anything. Ask ChatGPT to explain augmented reality, multi-threaded processing, blockchain, large language models, what have you. It will provide you with a clear and simple breakdown, maybe even with analogies and images.

    Analyze and make tough decisions
    Dave Parrack / Foundry
    We all face tough decisions every so often. The next time you find yourself wrestling with a particularly tough one—and you just can’t decide one way or the other—try asking ChatGPT for guidance and advice.
    It may sound strange to trust any kind of decision to artificial intelligence, let alone an important one that has you stumped, but doing so actually makes a lot of sense. While human judgment can be clouded by emotions, AI can set that aside and prioritize logic.
    It should go without saying: you don’t have to accept ChatGPT’s answers. Use the AI to weigh the pros and cons, to help you understand what’s most important to you, and to suggest a direction. Who knows? If you find yourself not liking the answer given, that in itself might clarify what you actually want—and the right answer for you. This is the kind of stuff ChatGPT can do to improve your life.

    Plan complex projects and strategies
    Dave Parrack / Foundry
    Most jobs come with some level of project planning and management. Even I, as a freelance writer, need to plan tasks to get projects completed on time. And that’s where ChatGPT can prove invaluable, breaking projects up into smaller, more manageable parts.
    ChatGPT needs to know the nature of the project, the end goal, any constraints you may have, and what you have done so far. With that information, it can then break the project up with a step-by-step plan, and break it down further into phases.
    If ChatGPT doesn’t initially split your project up in a way that suits you, try again. Change up the prompts and make the AI chatbot tune in to exactly what you’re looking for. It takes a bit of back and forth, but it can shorten your planning time from hours to mere minutes.

    Compile research notes
    Dave Parrack / Foundry
    If you need to research a given topic of interest, ChatGPT can save you the hassle of compiling that research. For example, ahead of a trip to Croatia, I wanted to know more about the Croatian War of Independence, so I asked ChatGPT to provide me with a brief summary of the conflict with bullet points to help me understand how it happened.
    After absorbing all that information, I asked ChatGPT to add a timeline of the major events, further helping me to understand how the conflict played out. ChatGPT then offered to provide me with battle maps and/or summaries, plus profiles of the main players.
    You can go even deeper with ChatGPT’s Deep Research feature, which is now available to free users, up to 5 Deep Research tasks per month. With Deep Research, ChatGPT conducts multi-step research to generate comprehensive reportsbased on large amounts of information across the internet. A Deep Research task can take up to 30 minutes to complete, but it’ll save you hours or even days.

    Summarize articles, meetings, and more
    Dave Parrack / Foundry
    There are only so many hours in the day, yet so many new articles published on the web day in and day out. When you come across extra-long reads, it can be helpful to run them through ChatGPT for a quick summary. Then, if the summary is lacking in any way, you can go back and plow through the article proper.
    As an example, I ran one of my own PCWorld articlesthrough ChatGPT, which provided a brief summary of my points and broke down the best X alternative based on my reasons given. Interestingly, it also pulled elements from other articles.If you don’t want that, you can tell ChatGPT to limit its summary to the contents of the link.
    This is a great trick to use for other long-form, text-heavy content that you just don’t have the time to crunch through. Think transcripts for interviews, lectures, videos, and Zoom meetings. The only caveat is to never share private details with ChatGPT, like company-specific data that’s protected by NDAs and the like.

    Create Q&A flashcards for learning
    Dave Parrack / Foundry
    Flashcards can be extremely useful for drilling a lot of information into your brain, such as when studying for an exam, onboarding in a new role, prepping for an interview, etc. And with ChatGPT, you no longer have to painstakingly create those flashcards yourself. All you have to do is tell the AI the details of what you’re studying.
    You can specify the format, as well as various other elements. You can also choose to keep things broad or target specific sub-topics or concepts you want to focus on. You can even upload your own notes for ChatGPT to reference. You can also use Google’s NotebookLM app in a similar way.

    Provide interview practice
    Dave Parrack / Foundry
    Whether you’re a first-time jobseeker or have plenty of experience under your belt, it’s always a good idea to practice for your interviews when making career moves. Years ago, you might’ve had to ask a friend or family member to act as your mock interviewer. These days, ChatGPT can do it for you—and do it more effectively.
    Inform ChatGPT of the job title, industry, and level of position you’re interviewing for, what kind of interview it’ll be, and anything else you want it to take into consideration. ChatGPT will then conduct a mock interview with you, providing feedback along the way.
    When I tried this out myself, I was shocked by how capable ChatGPT can be at pretending to be a human in this context. And the feedback it provides for each answer you give is invaluable for knocking off your rough edges and improving your chances of success when you’re interviewed by a real hiring manager.
    Further reading: Non-gimmicky AI apps I actually use every day
    #menial #tasks #chatgpt #can #handle
    9 menial tasks ChatGPT can handle in seconds, saving you hours
    ChatGPT is rapidly changing the world. The process is already happening, and it’s only going to accelerate as the technology improves, as more people gain access to it, and as more learn how to use it. What’s shocking is just how many tasks ChatGPT is already capable of managing for you. While the naysayers may still look down their noses at the potential of AI assistants, I’ve been using it to handle all kinds of menial tasks for me. Here are my favorite examples. Further reading: This tiny ChatGPT feature helps me tackle my days more productively Write your emails for you Dave Parrack / Foundry We’ve all been faced with the tricky task of writing an email—whether personal or professional—but not knowing quite how to word it. ChatGPT can do the heavy lifting for you, penning theperfect email based on whatever information you feed it. Let’s assume the email you need to write is of a professional nature, and wording it poorly could negatively affect your career. By directing ChatGPT to write the email with a particular structure, content, and tone of voice, you can give yourself a huge head start. A winning tip for this is to never accept ChatGPT’s first attempt. Always read through it and look for areas of improvement, then request tweaks to ensure you get the best possible email. You canalso rewrite the email in your own voice. Learn more about how ChatGPT coached my colleague to write better emails. Generate itineraries and schedules Dave Parrack / Foundry If you’re going on a trip but you’re the type of person who hates planning trips, then you should utilize ChatGPT’s ability to generate trip itineraries. The results can be customized to the nth degree depending on how much detail and instruction you’re willing to provide. As someone who likes to get away at least once a year but also wants to make the most of every trip, leaning on ChatGPT for an itinerary is essential for me. I’ll provide the location and the kinds of things I want to see and do, then let it handle the rest. Instead of spending days researching everything myself, ChatGPT does 80 percent of it for me. As with all of these tasks, you don’t need to accept ChatGPT’s first effort. Use different prompts to force the AI chatbot to shape the itinerary closer to what you want. You’d be surprised at how many cool ideas you’ll encounter this way—simply nix the ones you don’t like. Break down difficult concepts Dave Parrack / Foundry One of the best tasks to assign to ChatGPT is the explanation of difficult concepts. Ask ChatGPT to explain any concept you can think of and it will deliver more often than not. You can tailor the level of explanation you need, and even have it include visual elements. Let’s say, for example, that a higher-up at work regularly lectures everyone about the importance of networking. But maybe they never go into detail about what they mean, just constantly pushing the why without explaining the what. Well, just ask ChatGPT to explain networking! Okay, most of us know what “networking” is and the concept isn’t very hard to grasp. But you can do this with anything. Ask ChatGPT to explain augmented reality, multi-threaded processing, blockchain, large language models, what have you. It will provide you with a clear and simple breakdown, maybe even with analogies and images. Analyze and make tough decisions Dave Parrack / Foundry We all face tough decisions every so often. The next time you find yourself wrestling with a particularly tough one—and you just can’t decide one way or the other—try asking ChatGPT for guidance and advice. It may sound strange to trust any kind of decision to artificial intelligence, let alone an important one that has you stumped, but doing so actually makes a lot of sense. While human judgment can be clouded by emotions, AI can set that aside and prioritize logic. It should go without saying: you don’t have to accept ChatGPT’s answers. Use the AI to weigh the pros and cons, to help you understand what’s most important to you, and to suggest a direction. Who knows? If you find yourself not liking the answer given, that in itself might clarify what you actually want—and the right answer for you. This is the kind of stuff ChatGPT can do to improve your life. Plan complex projects and strategies Dave Parrack / Foundry Most jobs come with some level of project planning and management. Even I, as a freelance writer, need to plan tasks to get projects completed on time. And that’s where ChatGPT can prove invaluable, breaking projects up into smaller, more manageable parts. ChatGPT needs to know the nature of the project, the end goal, any constraints you may have, and what you have done so far. With that information, it can then break the project up with a step-by-step plan, and break it down further into phases. If ChatGPT doesn’t initially split your project up in a way that suits you, try again. Change up the prompts and make the AI chatbot tune in to exactly what you’re looking for. It takes a bit of back and forth, but it can shorten your planning time from hours to mere minutes. Compile research notes Dave Parrack / Foundry If you need to research a given topic of interest, ChatGPT can save you the hassle of compiling that research. For example, ahead of a trip to Croatia, I wanted to know more about the Croatian War of Independence, so I asked ChatGPT to provide me with a brief summary of the conflict with bullet points to help me understand how it happened. After absorbing all that information, I asked ChatGPT to add a timeline of the major events, further helping me to understand how the conflict played out. ChatGPT then offered to provide me with battle maps and/or summaries, plus profiles of the main players. You can go even deeper with ChatGPT’s Deep Research feature, which is now available to free users, up to 5 Deep Research tasks per month. With Deep Research, ChatGPT conducts multi-step research to generate comprehensive reportsbased on large amounts of information across the internet. A Deep Research task can take up to 30 minutes to complete, but it’ll save you hours or even days. Summarize articles, meetings, and more Dave Parrack / Foundry There are only so many hours in the day, yet so many new articles published on the web day in and day out. When you come across extra-long reads, it can be helpful to run them through ChatGPT for a quick summary. Then, if the summary is lacking in any way, you can go back and plow through the article proper. As an example, I ran one of my own PCWorld articlesthrough ChatGPT, which provided a brief summary of my points and broke down the best X alternative based on my reasons given. Interestingly, it also pulled elements from other articles.If you don’t want that, you can tell ChatGPT to limit its summary to the contents of the link. This is a great trick to use for other long-form, text-heavy content that you just don’t have the time to crunch through. Think transcripts for interviews, lectures, videos, and Zoom meetings. The only caveat is to never share private details with ChatGPT, like company-specific data that’s protected by NDAs and the like. Create Q&A flashcards for learning Dave Parrack / Foundry Flashcards can be extremely useful for drilling a lot of information into your brain, such as when studying for an exam, onboarding in a new role, prepping for an interview, etc. And with ChatGPT, you no longer have to painstakingly create those flashcards yourself. All you have to do is tell the AI the details of what you’re studying. You can specify the format, as well as various other elements. You can also choose to keep things broad or target specific sub-topics or concepts you want to focus on. You can even upload your own notes for ChatGPT to reference. You can also use Google’s NotebookLM app in a similar way. Provide interview practice Dave Parrack / Foundry Whether you’re a first-time jobseeker or have plenty of experience under your belt, it’s always a good idea to practice for your interviews when making career moves. Years ago, you might’ve had to ask a friend or family member to act as your mock interviewer. These days, ChatGPT can do it for you—and do it more effectively. Inform ChatGPT of the job title, industry, and level of position you’re interviewing for, what kind of interview it’ll be, and anything else you want it to take into consideration. ChatGPT will then conduct a mock interview with you, providing feedback along the way. When I tried this out myself, I was shocked by how capable ChatGPT can be at pretending to be a human in this context. And the feedback it provides for each answer you give is invaluable for knocking off your rough edges and improving your chances of success when you’re interviewed by a real hiring manager. Further reading: Non-gimmicky AI apps I actually use every day #menial #tasks #chatgpt #can #handle
    WWW.PCWORLD.COM
    9 menial tasks ChatGPT can handle in seconds, saving you hours
    ChatGPT is rapidly changing the world. The process is already happening, and it’s only going to accelerate as the technology improves, as more people gain access to it, and as more learn how to use it. What’s shocking is just how many tasks ChatGPT is already capable of managing for you. While the naysayers may still look down their noses at the potential of AI assistants, I’ve been using it to handle all kinds of menial tasks for me. Here are my favorite examples. Further reading: This tiny ChatGPT feature helps me tackle my days more productively Write your emails for you Dave Parrack / Foundry We’ve all been faced with the tricky task of writing an email—whether personal or professional—but not knowing quite how to word it. ChatGPT can do the heavy lifting for you, penning the (hopefully) perfect email based on whatever information you feed it. Let’s assume the email you need to write is of a professional nature, and wording it poorly could negatively affect your career. By directing ChatGPT to write the email with a particular structure, content, and tone of voice, you can give yourself a huge head start. A winning tip for this is to never accept ChatGPT’s first attempt. Always read through it and look for areas of improvement, then request tweaks to ensure you get the best possible email. You can (and should) also rewrite the email in your own voice. Learn more about how ChatGPT coached my colleague to write better emails. Generate itineraries and schedules Dave Parrack / Foundry If you’re going on a trip but you’re the type of person who hates planning trips, then you should utilize ChatGPT’s ability to generate trip itineraries. The results can be customized to the nth degree depending on how much detail and instruction you’re willing to provide. As someone who likes to get away at least once a year but also wants to make the most of every trip, leaning on ChatGPT for an itinerary is essential for me. I’ll provide the location and the kinds of things I want to see and do, then let it handle the rest. Instead of spending days researching everything myself, ChatGPT does 80 percent of it for me. As with all of these tasks, you don’t need to accept ChatGPT’s first effort. Use different prompts to force the AI chatbot to shape the itinerary closer to what you want. You’d be surprised at how many cool ideas you’ll encounter this way—simply nix the ones you don’t like. Break down difficult concepts Dave Parrack / Foundry One of the best tasks to assign to ChatGPT is the explanation of difficult concepts. Ask ChatGPT to explain any concept you can think of and it will deliver more often than not. You can tailor the level of explanation you need, and even have it include visual elements. Let’s say, for example, that a higher-up at work regularly lectures everyone about the importance of networking. But maybe they never go into detail about what they mean, just constantly pushing the why without explaining the what. Well, just ask ChatGPT to explain networking! Okay, most of us know what “networking” is and the concept isn’t very hard to grasp. But you can do this with anything. Ask ChatGPT to explain augmented reality, multi-threaded processing, blockchain, large language models, what have you. It will provide you with a clear and simple breakdown, maybe even with analogies and images. Analyze and make tough decisions Dave Parrack / Foundry We all face tough decisions every so often. The next time you find yourself wrestling with a particularly tough one—and you just can’t decide one way or the other—try asking ChatGPT for guidance and advice. It may sound strange to trust any kind of decision to artificial intelligence, let alone an important one that has you stumped, but doing so actually makes a lot of sense. While human judgment can be clouded by emotions, AI can set that aside and prioritize logic. It should go without saying: you don’t have to accept ChatGPT’s answers. Use the AI to weigh the pros and cons, to help you understand what’s most important to you, and to suggest a direction. Who knows? If you find yourself not liking the answer given, that in itself might clarify what you actually want—and the right answer for you. This is the kind of stuff ChatGPT can do to improve your life. Plan complex projects and strategies Dave Parrack / Foundry Most jobs come with some level of project planning and management. Even I, as a freelance writer, need to plan tasks to get projects completed on time. And that’s where ChatGPT can prove invaluable, breaking projects up into smaller, more manageable parts. ChatGPT needs to know the nature of the project, the end goal, any constraints you may have, and what you have done so far. With that information, it can then break the project up with a step-by-step plan, and break it down further into phases (if required). If ChatGPT doesn’t initially split your project up in a way that suits you, try again. Change up the prompts and make the AI chatbot tune in to exactly what you’re looking for. It takes a bit of back and forth, but it can shorten your planning time from hours to mere minutes. Compile research notes Dave Parrack / Foundry If you need to research a given topic of interest, ChatGPT can save you the hassle of compiling that research. For example, ahead of a trip to Croatia, I wanted to know more about the Croatian War of Independence, so I asked ChatGPT to provide me with a brief summary of the conflict with bullet points to help me understand how it happened. After absorbing all that information, I asked ChatGPT to add a timeline of the major events, further helping me to understand how the conflict played out. ChatGPT then offered to provide me with battle maps and/or summaries, plus profiles of the main players. You can go even deeper with ChatGPT’s Deep Research feature, which is now available to free users, up to 5 Deep Research tasks per month. With Deep Research, ChatGPT conducts multi-step research to generate comprehensive reports (with citations!) based on large amounts of information across the internet. A Deep Research task can take up to 30 minutes to complete, but it’ll save you hours or even days. Summarize articles, meetings, and more Dave Parrack / Foundry There are only so many hours in the day, yet so many new articles published on the web day in and day out. When you come across extra-long reads, it can be helpful to run them through ChatGPT for a quick summary. Then, if the summary is lacking in any way, you can go back and plow through the article proper. As an example, I ran one of my own PCWorld articles (where I compared Bluesky and Threads as alternatives to X) through ChatGPT, which provided a brief summary of my points and broke down the best X alternative based on my reasons given. Interestingly, it also pulled elements from other articles. (Hmph.) If you don’t want that, you can tell ChatGPT to limit its summary to the contents of the link. This is a great trick to use for other long-form, text-heavy content that you just don’t have the time to crunch through. Think transcripts for interviews, lectures, videos, and Zoom meetings. The only caveat is to never share private details with ChatGPT, like company-specific data that’s protected by NDAs and the like. Create Q&A flashcards for learning Dave Parrack / Foundry Flashcards can be extremely useful for drilling a lot of information into your brain, such as when studying for an exam, onboarding in a new role, prepping for an interview, etc. And with ChatGPT, you no longer have to painstakingly create those flashcards yourself. All you have to do is tell the AI the details of what you’re studying. You can specify the format (such as Q&A or multiple choice), as well as various other elements. You can also choose to keep things broad or target specific sub-topics or concepts you want to focus on. You can even upload your own notes for ChatGPT to reference. You can also use Google’s NotebookLM app in a similar way. Provide interview practice Dave Parrack / Foundry Whether you’re a first-time jobseeker or have plenty of experience under your belt, it’s always a good idea to practice for your interviews when making career moves. Years ago, you might’ve had to ask a friend or family member to act as your mock interviewer. These days, ChatGPT can do it for you—and do it more effectively. Inform ChatGPT of the job title, industry, and level of position you’re interviewing for, what kind of interview it’ll be (e.g., screener, technical assessment, group/panel, one-on-one with CEO), and anything else you want it to take into consideration. ChatGPT will then conduct a mock interview with you, providing feedback along the way. When I tried this out myself, I was shocked by how capable ChatGPT can be at pretending to be a human in this context. And the feedback it provides for each answer you give is invaluable for knocking off your rough edges and improving your chances of success when you’re interviewed by a real hiring manager. Further reading: Non-gimmicky AI apps I actually use every day
    0 Commentarii 0 Distribuiri
  • Mirela Cialai Q&A: Customer Engagement Book Interview

    Reading Time: 9 minutes
    In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential.
    That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success.
    In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers.
    You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI.
    Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.

     
    Mirela Cialai Q&A Interview
    1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience?

    Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives.

    This could be revenue growth, customer retention, market expansion, or operational efficiency.
    We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition.
    We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals.
    In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance.
    This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth.
    Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings.
    Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences.
    To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale.

    By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals.

    2. What steps did you take to ensure data accuracy?
    The data team was very diligent in ensuring that our data warehouse had accurate data.
    So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc.

    That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data.

    3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy?
    Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability.
    I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by at least 25 to 95%.
    This data helps make a compelling case to stakeholders about the importance of prioritizing retention.
    Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth.
    This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives.

    By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy.

    4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement?
    Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach.
    The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives.
    I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core systems, the data warehouse.
    Also, comparing the capabilities of each tool and results against industry standards and competitor practices and looking for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identifying overlapping tools that could be consolidated to save costs and streamline workflows.
    Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities.

    Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape.

    5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for?
    I recommend taking a structured approach and first ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels.
    Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow, and check if it supports dynamic content, behavior-based triggers, and advanced segmentation and can process and act on data in real time through emerging technologies like AI/ML predictive analytics to enable marketers to launch responsive and timely campaigns.
    Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns.
    Some of the red flags are: data silos where customer data is fragmented across platforms and not easily accessible or integrated, inability to process or respond to customer behavior in real time, a reliance on manual intervention for tasks like segmentation, data extraction, campaign deployment, and poor scalability.

    If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs.

    6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap?
    Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes.
    Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact.
    Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert.

    By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success.

    7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives?
    To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success.
    Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value.
    Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities.
    Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth.
    By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs.

    In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability.

    In the past, I’ve used frameworks like Impact Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first.
    8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you?
    Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability.
    We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success.
    To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams.

    To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together.

    9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like?
    A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine.
    In one word: PAPER. Here’s how it breaks down.

    Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals.
    Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps.
    Prioritize: initiatives based on impact, feasibility, and ROI potential.
    Execute: by implementing the roadmap in manageable phases.
    Refine: by continuously improving CRM performance and refining the roadmap.

    So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy.

    10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively?
    The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences.

    The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth.

    Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies.
    The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes.
    Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution.
    A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions.
    Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others.
    While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends.

    By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success.

    11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind?
    I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives.
    Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives.

    Another important lesson: The roadmap is only as effective as the data and systems it’s built upon.

    I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on.
    A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers.

    So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.

     

     
    This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die.
    Download the PDF or request a physical copy of the book here.
    The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    #mirela #cialai #qampampa #customer #engagement
    Mirela Cialai Q&A: Customer Engagement Book Interview
    Reading Time: 9 minutes In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential. That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success. In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers. You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI. Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.   Mirela Cialai Q&A Interview 1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience? Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives. This could be revenue growth, customer retention, market expansion, or operational efficiency. We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition. We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals. In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance. This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth. Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings. Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences. To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale. By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals. 2. What steps did you take to ensure data accuracy? The data team was very diligent in ensuring that our data warehouse had accurate data. So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc. That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data. 3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy? Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability. I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by at least 25 to 95%. This data helps make a compelling case to stakeholders about the importance of prioritizing retention. Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth. This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives. By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy. 4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement? Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach. The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives. I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core systems, the data warehouse. Also, comparing the capabilities of each tool and results against industry standards and competitor practices and looking for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identifying overlapping tools that could be consolidated to save costs and streamline workflows. Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities. Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape. 5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for? I recommend taking a structured approach and first ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels. Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow, and check if it supports dynamic content, behavior-based triggers, and advanced segmentation and can process and act on data in real time through emerging technologies like AI/ML predictive analytics to enable marketers to launch responsive and timely campaigns. Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns. Some of the red flags are: data silos where customer data is fragmented across platforms and not easily accessible or integrated, inability to process or respond to customer behavior in real time, a reliance on manual intervention for tasks like segmentation, data extraction, campaign deployment, and poor scalability. If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs. 6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap? Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes. Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact. Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert. By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success. 7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives? To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success. Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value. Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities. Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth. By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs. In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability. In the past, I’ve used frameworks like Impact Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first. 8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you? Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability. We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success. To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams. To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together. 9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like? A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine. In one word: PAPER. Here’s how it breaks down. Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals. Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps. Prioritize: initiatives based on impact, feasibility, and ROI potential. Execute: by implementing the roadmap in manageable phases. Refine: by continuously improving CRM performance and refining the roadmap. So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy. 10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively? The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences. The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth. Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies. The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes. Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution. A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions. Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others. While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends. By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success. 11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind? I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives. Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives. Another important lesson: The roadmap is only as effective as the data and systems it’s built upon. I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on. A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers. So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.     This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage. #mirela #cialai #qampampa #customer #engagement
    WWW.MOENGAGE.COM
    Mirela Cialai Q&A: Customer Engagement Book Interview
    Reading Time: 9 minutes In the ever-evolving landscape of customer engagement, staying ahead of the curve is not just advantageous, it’s essential. That’s why, for Chapter 7 of “The Customer Engagement Book: Adapt or Die,” we sat down with Mirela Cialai, a seasoned expert in CRM and Martech strategies at brands like Equinox. Mirela brings a wealth of knowledge in aligning technology roadmaps with business goals, shifting organizational focuses from acquisition to retention, and leveraging hyper-personalization to drive success. In this interview, Mirela dives deep into building robust customer engagement technology roadmaps. She unveils the “PAPER” framework—Plan, Audit, Prioritize, Execute, Refine—a simple yet effective strategy for marketers. You’ll gain insights into identifying gaps in your Martech stack, ensuring data accuracy, and prioritizing initiatives that deliver the greatest impact and ROI. Whether you’re navigating data silos, striving for cross-functional alignment, or aiming for seamless tech integration, Mirela’s expertise provides practical solutions and actionable takeaways.   Mirela Cialai Q&A Interview 1. How do you define the vision for a customer engagement platform roadmap in alignment with the broader business goals? Can you share any examples of successful visions from your experience? Defining the vision for the roadmap in alignment with the broader business goals involves creating a strategic framework that connects the team’s objectives with the organization’s overarching mission or primary objectives. This could be revenue growth, customer retention, market expansion, or operational efficiency. We then break down these goals into actionable areas where the team can contribute, such as improving engagement, increasing lifetime value, or driving acquisition. We articulate how the team will support business goals by defining the KPIs that link CRM outcomes — the team’s outcomes — to business goals. In a previous role, the CRM team I was leading faced significant challenges due to the lack of attribution capabilities and a reliance on surface-level metrics such as open rates and click-through rates to measure performance. This approach made it difficult to quantify the impact of our efforts on broader business objectives such as revenue growth. Recognizing this gap, I worked on defining a vision for the CRM team to address these shortcomings. Our vision was to drive measurable growth through enhanced data accuracy and improved attribution capabilities, which allowed us to deliver targeted, data-driven, and personalized customer experiences. To bring this vision to life, I developed a roadmap that focused on first improving data accuracy, building our attribution capabilities, and delivering personalization at scale. By aligning the vision with these strategic priorities, we were able to demonstrate the tangible impact of our efforts on the key business goals. 2. What steps did you take to ensure data accuracy? The data team was very diligent in ensuring that our data warehouse had accurate data. So taking that as the source of truth, we started cleaning the data in all the other platforms that were integrated with our data warehouse — our CRM platform, our attribution analytics platform, etc. That’s where we started, looking at all the different integrations and ensuring that the data flows were correct and that we had all the right flows in place. And also validating and cleaning our email database — that helped, having more accurate data. 3. How do you recommend shifting organizational focus from acquisition to retention within a customer engagement strategy? Shifting an organization’s focus from acquisition to retention requires a cultural and strategic shift, emphasizing the immense value that existing customers bring to long-term growth and profitability. I would start by quantifying the value of retention, showcasing how retaining customers is significantly more cost-effective than acquiring new ones. Research consistently shows that increasing retention rates by just 5% can boost profits by at least 25 to 95%. This data helps make a compelling case to stakeholders about the importance of prioritizing retention. Next, I would link retention to core business goals by demonstrating how enhancing customer lifetime value and loyalty can directly drive revenue growth. This involves shifting the organization’s focus to retention-specific metrics such as churn rate, repeat purchase rate, and customer LTV. These metrics provide actionable insights into customer behaviors and highlight the financial impact of retention initiatives, ensuring alignment with the broader company objectives. By framing retention as a driver of sustainable growth, the organization can see it not as a competing priority, but as a complementary strategy to acquisition, ultimately leading to a more balanced and effective customer engagement strategy. 4. What are the key steps in analyzing a brand’s current Martech stack capabilities to identify gaps and opportunities for improvement? Developing a clear understanding of the Martech stack’s current state and ensuring it aligns with a brand’s strategic needs and future goals requires a structured and strategic approach. The process begins with defining what success looks like in terms of technology capabilities such as scalability, integration, automation, and data accessibility, and linking these capabilities directly to the brand’s broader business objectives. I start by doing an inventory of all tools currently in use, including their purpose, owner, and key functionalities, assessing if these tools are being used to their full potential or if there are features that remain unused, and reviewing how well tools integrate with one another and with our core systems, the data warehouse. Also, comparing the capabilities of each tool and results against industry standards and competitor practices and looking for missing functionalities such as personalization, omnichannel orchestration, or advanced analytics, and identifying overlapping tools that could be consolidated to save costs and streamline workflows. Finally, review the costs of the current tools against their impact on business outcomes and identify technologies that could reduce costs, increase efficiency, or deliver higher ROI through enhanced capabilities. Establish a regular review cycle for the Martech stack to ensure it evolves alongside the business and the technological landscape. 5. How do you evaluate whether a company’s tech stack can support innovative customer-focused campaigns, and what red flags should marketers look out for? I recommend taking a structured approach and first ensure there is seamless integration across all tools to support a unified customer view and data sharing across the different channels. Determine if the stack can handle increasing data volumes, larger audiences, and additional channels as the campaigns grow, and check if it supports dynamic content, behavior-based triggers, and advanced segmentation and can process and act on data in real time through emerging technologies like AI/ML predictive analytics to enable marketers to launch responsive and timely campaigns. Most importantly, we need to ensure that the stack offers robust reporting tools that provide actionable insights, allowing teams to track performance and optimize campaigns. Some of the red flags are: data silos where customer data is fragmented across platforms and not easily accessible or integrated, inability to process or respond to customer behavior in real time, a reliance on manual intervention for tasks like segmentation, data extraction, campaign deployment, and poor scalability. If the stack struggles with growing data volumes or expanding to new channels, it won’t support the company’s evolving needs. 6. What role do hyper-personalization and timely communication play in a successful customer engagement strategy? How do you ensure they’re built into the technology roadmap? Hyper-personalization and timely communication are essential components of a successful customer engagement strategy because they create meaningful, relevant, and impactful experiences that deepen the relationship with customers, enhance loyalty, and drive business outcomes. Hyper-personalization leverages data to deliver tailored content that resonates with each individual based on their preferences, behavior, or past interactions, and timely communication ensures these personalized interactions occur at the most relevant moments, which ultimately increases their impact. Customers are more likely to engage with messages that feel relevant and align with their needs, and real-time triggers such as cart abandonment or post-purchase upsells capitalize on moments when customers are most likely to convert. By embedding these capabilities into the roadmap through data integration, AI-driven insights, automation, and continuous optimization, we can deliver impactful, relevant, and timely experiences that foster deeper customer relationships and drive long-term success. 7. What’s your approach to breaking down the customer engagement technology roadmap into manageable phases? How do you prioritize the initiatives? To create a manageable roadmap, we need to divide it into distinct phases, starting with building the foundation by addressing data cleanup, system integrations, and establishing metrics, which lays the groundwork for success. Next, we can focus on early wins and quick impact by launching behavior-based campaigns, automating workflows, and improving personalization to drive immediate value. Then we can move to optimization and expansion, incorporating predictive analytics, cross-channel orchestration, and refined attribution models to enhance our capabilities. Finally, prioritize innovation and scalability, leveraging AI/ML for hyper-personalization, scaling campaigns to new markets, and ensuring the system is equipped for future growth. By starting with foundational projects, delivering quick wins, and building towards scalable innovation, we can drive measurable outcomes while maintaining our agility to adapt to evolving needs. In terms of prioritizing initiatives effectively, I would focus on projects that deliver the greatest impact on business goals, on customer experience and ROI, while we consider feasibility, urgency, and resource availability. In the past, I’ve used frameworks like Impact Effort Matrix to identify the high-impact, low-effort initiatives and ensure that the most critical projects are addressed first. 8. How do you ensure cross-functional alignment around this roadmap? What processes have worked best for you? Ensuring cross-functional alignment requires clear communication, collaborative planning, and shared accountability. We need to establish a shared understanding of the roadmap’s purpose and how it ties to the company’s overall goals by clearly articulating the “why” behind the roadmap and how each team can contribute to its success. To foster buy-in and ensure the roadmap reflects diverse perspectives and needs, we need to involve all stakeholders early on during the roadmap development and clearly outline each team’s role in executing the roadmap to ensure accountability across the different teams. To keep teams informed and aligned, we use meetings such as roadmap kickoff sessions and regular check-ins to share updates, address challenges collaboratively, and celebrate milestones together. 9. If you were to outline a simple framework for marketers to follow when building a customer engagement technology roadmap, what would it look like? A simple framework for marketers to follow when building the roadmap can be summarized in five clear steps: Plan, Audit, Prioritize, Execute, and Refine. In one word: PAPER. Here’s how it breaks down. Plan: We lay the groundwork for the roadmap by defining the CRM strategy and aligning it with the business goals. Audit: We evaluate the current state of our CRM capabilities. We conduct a comprehensive assessment of our tools, our data, the processes, and team workflows to identify any potential gaps. Prioritize: initiatives based on impact, feasibility, and ROI potential. Execute: by implementing the roadmap in manageable phases. Refine: by continuously improving CRM performance and refining the roadmap. So the PAPER framework — Plan, Audit, Prioritize, Execute, and Refine — provides a structured, iterative approach allowing marketers to create a scalable and impactful customer engagement strategy. 10. What are the most common challenges marketers face in creating or executing a customer engagement strategy, and how can they address these effectively? The most critical is when the customer data is siloed across different tools and platforms, making it very difficult to get a unified view of the customer. This limits the ability to deliver personalized and consistent experiences. The solution is to invest in tools that can centralize data from all touchpoints and ensure seamless integration between different platforms to create a single source of truth. Another challenge is the lack of clear metrics and ROI measurement and the inability to connect engagement efforts to tangible business outcomes, making it very hard to justify investment or optimize strategies. The solution for that is to define clear KPIs at the outset and use attribution models to link customer interactions to revenue and other key outcomes. Overcoming internal silos is another challenge where there is misalignment between teams, which can lead to inconsistent messaging and delayed execution. A solution to this is to foster cross-functional collaboration through shared goals, regular communication, and joint planning sessions. Besides these, other challenges marketers can face are delivering personalization at scale, keeping up with changing customer expectations, resource and budget constraints, resistance to change, and others. While creating and executing a customer engagement strategy can be challenging, these obstacles can be addressed through strategic planning, leveraging the right tools, fostering collaboration, and staying adaptable to customer needs and industry trends. By tackling these challenges proactively, marketers can deliver impactful customer-centric strategies that drive long-term success. 11. What are the top takeaways or lessons that you’ve learned from building customer engagement technology roadmaps that others should keep in mind? I would say one of the most important takeaways is to ensure that the roadmap directly supports the company’s broader objectives. Whether the focus is on retention, customer lifetime value, or revenue growth, the roadmap must bridge the gap between high-level business goals and actionable initiatives. Another important lesson: The roadmap is only as effective as the data and systems it’s built upon. I’ve learned the importance of prioritizing foundational elements like data cleanup, integrations, and governance before tackling advanced initiatives like personalization or predictive analytics. Skipping this step can lead to inefficiencies or missed opportunities later on. A Customer Engagement Roadmap is a strategic tool that evolves alongside the business and its customers. So by aligning with business goals, building a solid foundation, focusing on impact, fostering collaboration, and remaining adaptable, you can create a roadmap that delivers measurable results and meaningful customer experiences.     This interview Q&A was hosted with Mirela Cialai, Director of CRM & MarTech at Equinox, for Chapter 7 of The Customer Engagement Book: Adapt or Die. Download the PDF or request a physical copy of the book here. The post Mirela Cialai Q&A: Customer Engagement Book Interview appeared first on MoEngage.
    0 Commentarii 0 Distribuiri
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”          
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    #how #reshaping #future #healthcare #medical
    How AI is reshaping the future of healthcare and medical research
    Transcript        PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”           This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   #how #reshaping #future #healthcare #medical
    WWW.MICROSOFT.COM
    How AI is reshaping the future of healthcare and medical research
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT (opens in new tab). And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini (opens in new tab). So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
    0 Commentarii 0 Distribuiri