• Retail Reboot: Major Global Brands Transform End-to-End Operations With NVIDIA

AI is packing and shipping efficiency for the retail and consumer packaged goods (CPG) industries, with a majority of surveyed companies in the space reporting that the technology is increasing revenue and reducing operational costs.
    Global brands are reimagining every facet of their businesses with AI, from how products are designed and manufactured to how they’re marketed, shipped and experienced in-store and online.
    At NVIDIA GTC Paris at VivaTech, industry leaders including L’Oréal, LVMH and Nestlé shared how they’re using tools like AI agents and physical AI — powered by NVIDIA AI and simulation technologies — across every step of the product lifecycle to enhance operations and experiences for partners, customers and employees.
    3D Digital Twins and AI Transform Marketing, Advertising and Product Design
    The meeting of generative AI and 3D product digital twins results in unlimited creative potential.
    Nestlé, the world’s largest food and beverage company, today announced a collaboration with NVIDIA and Accenture to launch a new, AI-powered in-house service that will create high-quality product content at scale for e-commerce and digital media channels.
    The new content service, based on digital twins powered by the NVIDIA Omniverse platform, creates exact 3D virtual replicas of physical products. Product packaging can be adjusted or localized digitally, enabling seamless integration into various environments, such as seasonal campaigns or channel-specific formats. This means that new creative content can be generated without having to constantly reshoot from scratch.
    Image courtesy of Nestlé
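Omniverse digital twins are built on OpenUSD, where this kind of packaging swap is naturally modeled as a variant set on the product asset. The sketch below illustrates the idea with the open-source pxr Python bindings; the file name, prim path and variant names are hypothetical placeholders, not details of Nestlé's actual pipeline.

```python
# Minimal sketch: switching localized packaging on a product digital twin
# using OpenUSD variant sets (the scene format underlying NVIDIA Omniverse).
# All paths and variant names below are hypothetical.
from pxr import Usd

stage = Usd.Stage.Open("product_jar.usd")            # product digital twin
product = stage.GetPrimAtPath("/World/Product")

# A "packaging" variant set might hold one variant per market or campaign.
packaging = product.GetVariantSets().GetVariantSet("packaging")
print(packaging.GetVariantNames())                   # e.g. ['global', 'fr_FR', 'holiday_2025']

# Select the seasonal artwork, then export a render-ready copy --
# no physical reshoot required.
packaging.SetVariantSelection("holiday_2025")
stage.Export("product_jar_holiday_2025.usd")
```

Because each campaign or locale is just a variant selection on the same twin, new creative content becomes a render job rather than a photo shoot.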
    The service is developed in partnership with Accenture Song, using Accenture AI Refinery built on NVIDIA Omniverse for advanced digital twin creation. It uses NVIDIA AI Enterprise for generative AI, hosted on Microsoft Azure for robust cloud infrastructure.
    Nestlé already has a baseline of 4,000 3D digital products — mainly for global brands — with the ambition to convert a total of 10,000 products into digital twins in the next two years across global and local brands.
    LVMH, the world’s leading luxury goods company, home to 75 distinguished maisons, is bringing 3D digital twins to its content production processes through its wine and spirits division, Moët Hennessy.
    The group partnered with content configuration engine Grip to develop a solution using the NVIDIA Omniverse platform, which enables the creation of 3D digital twins that power content variation production. With Grip’s solution, Moët Hennessy teams can quickly generate digital marketing assets and experiences to promote luxury products at scale.
    The initiative, led by Capucine Lafarge and Chloé Fournier, has been recognized by LVMH as a leading approach to scaling content creation.
    Image courtesy of Grip
    L’Oréal Gives Marketing and Online Shopping an AI Makeover
    Innovation starts at the drawing board. Today, that board is digital — and it’s powered by AI.
    L’Oréal Groupe, the world’s leading beauty player, announced its collaboration with NVIDIA today. Through this collaboration, L’Oréal and its partner ecosystem will leverage the NVIDIA AI Enterprise platform to transform its consumer beauty experiences, marketing and advertising content pipelines.
    “AI doesn’t think with the same constraints as a human being. That opens new avenues for creativity,” said Anne Machet, global head of content and entertainment at L’Oréal. “Generative AI enables our teams and partner agencies to explore creative possibilities.”
CreAItech, L’Oréal’s generative AI content platform, is augmenting the creativity of marketing and content teams. Combining a modular ecosystem of models, expertise, technologies and partners — including NVIDIA — CreAItech empowers marketers to generate thousands of unique, on-brand images, videos and lines of text for diverse platforms and global audiences.
    The solution empowers L’Oréal’s marketing teams to quickly iterate on campaigns that improve consumer engagement across social media, e-commerce content and influencer marketing — driving higher conversion rates.

Noli.com, the first AI-powered multi-brand marketplace startup founded and backed by the L’Oréal Groupe, is reinventing how people discover and shop for beauty products.
    Noli’s AI Beauty Matchmaker experience uses L’Oréal Groupe’s century-long expertise in beauty, including its extensive knowledge of beauty science, beauty tech and consumer insights, built from over 1 million skin data points and analysis of thousands of product formulations. It gives users a BeautyDNA profile with expert-level guidance and personalized product recommendations for skincare and haircare.
    “Beauty shoppers are often overwhelmed by choice and struggling to find the products that are right for them,” said Amos Susskind, founder and CEO of Noli. “By applying the latest AI models accelerated by NVIDIA and Accenture to the unparalleled knowledge base and expertise of the L’Oréal Groupe, we can provide hyper-personalized, explainable recommendations to our users.” 

    The Accenture AI Refinery, powered by NVIDIA AI Enterprise, will provide the platform for Noli to experiment and scale. Noli’s new agent models will use NVIDIA NIM and NVIDIA NeMo microservices, including NeMo Retriever, running on Microsoft Azure.
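NIM microservices expose an OpenAI-compatible REST interface, so an agent like Noli's can be queried with standard client libraries. Below is a minimal, hypothetical sketch of such a call; the base URL, model name and retrieved context are placeholders, not details of Noli's deployment.

```python
# Minimal sketch: calling a NIM-hosted LLM through its OpenAI-compatible
# endpoint, as a retrieval-augmented recommendation agent might.
from openai import OpenAI

client = OpenAI(
    base_url="http://nim.example.internal/v1",  # hypothetical NIM endpoint
    api_key="not-used-for-local-nim",
)

# Context retrieved elsewhere (e.g. via a NeMo Retriever pipeline) is
# folded into the prompt before the model is called.
retrieved = "User skin profile: dry, sensitive. Top matching formulations: ..."

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are a beauty-product advisor. "
         "Explain the reasoning behind every recommendation."},
        {"role": "user", "content": f"{retrieved}\n\nRecommend a routine."},
    ],
)
print(response.choices[0].message.content)
```

Keeping retrieval separate from generation is what lets the recommendations stay explainable: the model is always answering against an explicit, inspectable context.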
    Rapid Innovation With the NVIDIA Partner Ecosystem
    NVIDIA’s ecosystem of solution provider partners empowers retail and CPG companies to innovate faster, personalize customer experiences, and optimize operations with NVIDIA accelerated computing and AI.
Global digital agency Monks is reshaping the landscape of AI-driven marketing, creative production and enterprise transformation. At the heart of this work is Monks.Flow, a platform that enhances both the speed and sophistication of creative workflows using NVIDIA Omniverse, NVIDIA NIM microservices and NVIDIA Triton Inference Server for lightning-fast inference.
AI image solutions provider Bria is helping retail giants like Lidl and L’Oréal enhance marketing asset creation. Bria AI transforms static product images into compelling, dynamic advertisements that can be quickly scaled for use across any marketing need.
    The company’s generative AI platform uses NVIDIA Triton Inference Server software and the NVIDIA TensorRT software development kit for accelerated inference, as well as NVIDIA NIM and NeMo microservices for quick image generation at scale.
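For a sense of what serving through Triton looks like from the client side, here is a minimal sketch using the tritonclient library. The server address, model name and tensor names are hypothetical; in a real deployment they come from the model's config.pbtxt, and Bria's actual pipeline is not public.

```python
# Minimal sketch: requesting one generated image from a model served by
# NVIDIA Triton Inference Server. Names and shapes are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Text prompt goes in as a BYTES tensor of shape [1].
prompt = np.array([b"studio shot of a lipstick on a marble counter"],
                  dtype=np.object_)
inp = httpclient.InferInput("PROMPT", [1], "BYTES")
inp.set_data_from_numpy(prompt)

result = client.infer(model_name="image_generator", inputs=[inp])
image = result.as_numpy("GENERATED_IMAGE")  # e.g. a (1024, 1024, 3) uint8 array
print(image.shape)
```

The same client loop scales to batch generation, which is how "thousands of on-brand variations" becomes a throughput problem that TensorRT-accelerated inference can absorb.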
    Physical AI Brings Acceleration to Supply Chain and Logistics
    AI’s impact extends far beyond the digital world. Physical AI-powered warehousing robots, for example, are helping maximize efficiency in retail supply chain operations. Four in five retail companies have reported that AI has helped reduce supply chain operational costs, with 25% reporting cost reductions of at least 10%.
    Technology providers Lyric, KoiReader Technologies and Exotec are tackling the challenges of integrating AI into complex warehouse environments.
    Lyric is using the NVIDIA cuOpt GPU-accelerated solver for warehouse network planning and route optimization, and is collaborating with NVIDIA to apply the technology to broader supply chain decision-making problems. KoiReader Technologies is tapping the NVIDIA Metropolis stack for its computer vision solutions within logistics, supply chain and manufacturing environments using the KoiVision Platform. And Exotec is using NVIDIA CUDA libraries and the NVIDIA JetPack software development kit for embedded robotic systems in warehouse and distribution centers.
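As an illustration of the class of problem cuOpt solves, the sketch below posts a toy two-vehicle routing problem to a self-hosted cuOpt service. The endpoint and the exact field names are assumptions based on the general shape of the service's documented JSON schema and may differ across cuOpt versions; treat this as a sketch, not a reference payload.

```python
# Minimal sketch: a tiny vehicle-routing request to a self-hosted cuOpt
# service. Endpoint and field names are assumptions -- verify against the
# API reference for your cuOpt version.
import requests

payload = {
    # Travel-cost matrix between a depot (index 0) and two stops (1, 2).
    "cost_matrix_data": {"data": {"0": [[0, 10, 15],
                                        [10, 0, 35],
                                        [15, 35, 0]]}},
    # Two vehicles, both starting and ending at the depot.
    "fleet_data": {"vehicle_locations": [[0, 0], [0, 0]],
                   "capacities": [[10, 10]]},
    # Two delivery tasks with their demands.
    "task_data": {"task_locations": [1, 2], "demand": [[3, 4]]},
}

resp = requests.post("http://cuopt.example.internal:5000/cuopt/routes",
                     json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # optimized stop sequence per vehicle
```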
    From real-time robotics orchestration to predictive maintenance, these solutions are delivering impact on uptime, throughput and cost savings for supply chain operations.
    Learn more by joining a follow-up discussion on digital twins and AI-powered creativity with Microsoft, Nestlé, Accenture and NVIDIA at Cannes Lions on Monday, June 16.
    Watch the NVIDIA GTC Paris keynote from NVIDIA founder and CEO Jensen Huang at VivaTech, and explore GTC Paris sessions.
  • In a world where dreams fade like whispers in the wind, I find myself grappling with a sense of profound solitude. The announcement of the Prince of Persia: The Sands of Time remake in 2020 felt like a beacon of hope, a reminder that nostalgia could be revived and cherished once more. Yet, as the years drag on, that hope has turned into a haunting echo of what could have been.

    Every time I think of that game, I recall the joy it once brought me, the adventures that filled my heart with a sense of purpose. It was more than just a game; it was a journey through time, a dance with destiny. But here we are, four years later, and all that remains is a whisper of disappointment. Ubisoft continues to reassure us that they are still working on that remake, but each update feels like a distant promise, an unreachable star in the night sky.

    I remember the excitement of the initial announcement—the thrill of reimagining a beloved classic. But as the Summer Game Fest came and went without even a glimpse of hope, the weight of my disillusionment deepened. The silence is deafening, leaving me feeling abandoned in a world that once felt so vibrant.

    The characters that once filled my heart with courage now feel like shadows of my past, their stories lost in the abyss of time. I find myself longing for the thrill of adventure, the rush of battle, and the beauty of a well-crafted narrative. Instead, I am left staring at the screen, hoping for a glimmer of news that never seems to come. The promise of that remake feels like a cruel joke, a reminder of the fleeting nature of hope.

    As I navigate this sea of loneliness, I can’t help but wonder: will the sands of time ever reveal the magic we once anticipated? Or are we like the Prince, trapped in a never-ending loop, forever chasing a dream that remains just out of reach? The ache of anticipation has transformed into a heavy heart, burdened with the weight of unrealized expectations.

    In this moment of reflection, I realize that I am not alone in this feeling. Many of us are waiting, hoping for something that may never arrive. It’s a shared grief, a collective yearning for the joy that once was. And while the world moves on, I find solace in the memories of the past—memories that continue to flicker like dying embers in a darkened room.

    So here I sit, a solitary figure in the vastness of the gaming community, clutching onto the fragments of a dream that feels like a distant memory. Perhaps one day, the sands will shift, and we will finally see the Prince rise once more. Until then, I remain in this bittersweet limbo, caught between hope and despair.

    #PrinceOfPersia #Ubisoft #SandsOfTime #GamingCommunity #Nostalgia
  • The 25 creative studios inspiring us the most in 2025

    Which creative studio do you most admire right now, and why? This is a question we asked our community via an ongoing survey. With more than 700 responses so far, these are the top winners. What's striking about this year's results is the popularity of studios that aren't just producing beautiful work but are also actively shaping discussions and tackling the big challenges facing our industry and society.
    From the vibrant energy of Brazilian culture to the thoughtful minimalism of North European aesthetics, this list reflects a global creative landscape that's more connected, more conscious, and more collaborative than ever before.
    In short, these studios aren't just following trends; they're setting them. Read on to discover the 25 studios our community is most excited about right now.
    1. Porto Rocha
    Porto Rocha is a New York-based agency that unites strategy and design to create work that evolves with the world we live in. It continues to dominate conversations in 2025, and it's easy to see why. Founders Felipe Rocha and Leo Porto have built something truly special—a studio that not only creates visually stunning work but also actively celebrates and amplifies diverse voices in design.
    For instance, their recent bold new identity for the São Paulo art museum MASP nods to Brazilian modernist design traditions while reimagining them for a contemporary audience. The rebrand draws heavily on the museum's iconic modernist architecture by Lina Bo Bardi, using a red-and-black colour palette and strong typography to reflect the building's striking visual presence.
    As we write this article, Porto Rocha just shared a new partnership with Google to reimagine the visual and verbal identity of its revolutionary Gemini AI model. We can't wait to see what they come up with!

    2. DixonBaxi
    Simon Dixon and Aporva Baxi's London powerhouse specialises in creating brand strategies and design systems for "brave businesses" that want to challenge convention, including Hulu, Audible, and the Premier League. The studio had an exceptional start to 2025 by collaborating with Roblox on a brand new design system. At the heart of this major project is the Tilt: a 15-degree shift embedded in the logo that signals momentum, creativity, and anticipation.
    They've also continued to build their reputation as design thought leaders. At the OFFF Festival 2025, for instance, Simon and Aporva delivered a masterclass on running a successful brand design agency. Their core message centred on the importance of people and designing with intention, even in the face of global challenges. They also highlighted "Super Futures," their program that encourages employees to think freely and positively about brand challenges and audience desires, aiming to reclaim creative liberation.
    And if that wasn't enough, DixonBaxi has just launched its brand new website, one that's designed to be open in nature. As Simon explains: "It's not a shop window. It's a space to share the thinking and ethos that drive us. You'll find our work, but more importantly, what shapes it. No guff. Just us."

    3. Mother
    Mother is a renowned independent creative agency founded in London and now boasts offices in New York and Los Angeles as well. They've spent 2025 continuing to push the boundaries of what advertising can achieve. And they've made an especially big splash with their latest instalment of KFC's 'Believe' campaign, featuring a surreal and humorous take on KFC's gravy. As we wrote at the time: "Its balance between theatrical grandeur and self-awareness makes the campaign uniquely engaging."
    4. Studio Dumbar/DEPT®
    Based in Rotterdam, Studio Dumbar/DEPT® is widely recognised for its influential work in visual branding and identity, often incorporating creative coding and sound, for clients such as the Dutch Railways, Instagram, and the Van Gogh Museum.
    In 2025, we've especially admired their work for the Dutch football club Feyenoord, which brings the team under a single, cohesive vision that reflects its energy and prowess. This groundbreaking rebrand, unveiled at the start of May, moves away from nostalgia, instead emphasising the club's "measured ferocity, confidence, and ambition".
    5. HONDO
    Based between Palma de Mallorca, Spain and London, HONDO specialises in branding, editorial, typography and product design. We're particular fans of their rebranding of metal furniture makers Castil, based around clean and versatile designs that highlight Castil's vibrant and customisable products.
    This new system features a bespoke monospaced typeface and logo design that evokes Castil's adaptability and the precision of its craftsmanship.

    6. Smith & Diction
    Smith & Diction is a small but mighty design and copy studio founded by Mike and Chara Smith in Philadelphia. Born from dreams, late-night chats, and plenty of mistakes, the studio has grown into a creative force known for thoughtful, boundary-pushing branding.
    Starting out with Mike designing in a tiny apartment while Chara held down a day job, the pair learned the ropes the hard way—and now they're thriving. Recent highlights include their work with Gamma, an AI platform that lets you quickly get ideas out of your head and into a presentation deck or onto a website.
    Gamma wanted their brand update to feel "VERY fun and a little bit out there" with an AI-first approach. So Smith & Diction worked hard to "put weird to the test" while still developing responsible systems for logo, type and colour. The results, as ever, were exceptional.

    7. DNCO
    DNCO is a London and New York-based creative studio specialising in place branding. They are best known for shaping identities, digital tools, and wayfinding for museums, cultural institutions, and entire neighbourhoods, with clients including the Design Museum, V&A and Transport for London.
    Recently, DNCO has been making headlines again with its ambitious brand refresh for Dumbo, a New York neighbourhood struggling with misperceptions due to mass tourism. The goal was to highlight Dumbo's unconventional spirit and demonstrate it as "a different side of New York."
    DNCO preserved the original diagonal logo and introduced a flexible "tape graphic" system, inspired by the neighbourhood's history of inventing the cardboard box, to reflect its ingenuity and reveal new perspectives. The colour palette and typography were chosen to embody Dumbo's industrial and gritty character.

    8. Hey Studio
    Founded by Verònica Fuerte in Barcelona, Spain, Hey Studio is a small, all-female design agency celebrated for its striking use of geometry, bold colour, and playful yet refined visual language. With a focus on branding, illustration, editorial design, and typography, they combine joy with craft to explore issues with heart and purpose.
    A great example of their impact is their recent branding for Rainbow Wool. This German initiative is transforming wool from gay rams into fashion products to support the LGBT community.
    As is typical for Hey Studio, the project's identity is vibrant and joyful, utilising bright, curved shapes that will put a smile on everyone's face.

    9. Koto
    Koto is a London-based global branding and digital studio known for co-creation, strategic thinking, expressive design systems, and enduring partnerships. They're well-known in the industry for bringing warmth, optimism and clarity to complex brand challenges.
    Over the past 18 months, they've undertaken a significant project to refresh Amazon's global brand identity. This extensive undertaking has involved redesigning Amazon's master brand and over 50 of its sub-brands across 15 global markets.
    Koto's approach, described as "radical coherence", aims to refine and modernize Amazon's most recognizable elements rather than drastically changing them. You can read more about the project here.

    10. Robot Food
    Robot Food is a Leeds-based, brand-first creative studio recognised for its strategic and holistic approach. They're past masters at melding creative ideas with commercial rigour across packaging, brand strategy and campaign design.
    Recent Robot Food projects have included a bold rebrand for Hip Pop, a soft drinks company specializing in kombucha and alternative sodas. Their goal was to elevate Hip Pop from an indie challenger to a mainstream category leader, moving away from typical health drink aesthetics.
The results are visually striking, with black backgrounds prominently featured, punctuated by vibrant fruit illustrations and flavour-coded colours. You can read more about the project here.

    11. Saffron Brand Consultants
    Saffron is an independent global consultancy with offices in London, Madrid, Vienna and Istanbul. With deep expertise in naming, strategy, identity, and design systems, they work with leading public and private-sector clients to develop confident, culturally intelligent brands.
One 2025 highlight so far has been their work for Saudi National Bank to create NEO, a groundbreaking digital lifestyle bank in Saudi Arabia.
    Saffron integrated cultural and design trends, including Saudi neo-futurism, for its sonic identity to create a product that supports both individual and community connections. The design system strikes a balance between modern Saudi aesthetics and the practical demands of a fast-paced digital product, ensuring a consistent brand reflection across all interactions.
    12. Alright Studio
Alright Studio is a full-service strategy, creative, production and technology agency based in Brooklyn, New York. It prides itself on a "no house style" approach for clients, including A24, Meta Platforms, and Post Malone. One of the most exciting of their recent projects has been OffBall, a digital-first sports news platform that aims to provide more nuanced, positive sports storytelling.
    Alright Studio designed a clean, intuitive, editorial-style platform featuring a masthead-like logotype and universal sports iconography, creating a calmer user experience aligned with OffBall's positive content.
    13. Wolff Olins
    Wolff Olins is a global brand consultancy with four main offices: London, New York, San Francisco, and Los Angeles. Known for their courageous, culturally relevant branding and forward-thinking strategy, they collaborate with large corporations and trailblazing organisations to create bold, authentic brand identities that resonate emotionally.
    A particular highlight of 2025 so far has been their collaboration with Leo Burnett to refresh Sandals Resorts' global brand with the "Made of Caribbean" campaign. This strategic move positions Sandals not merely as a luxury resort but as a cultural ambassador for the Caribbean.
    Wolff Olins developed a new visual identity called "Natural Vibrancy," integrating local influences with modern design to reflect a genuine connection to the islands' culture. This rebrand speaks to a growing traveller demand for authenticity and meaningful experiences, allowing Sandals to define itself as an extension of the Caribbean itself.

    14. COLLINS
    Founded by Brian Collins, COLLINS is an independent branding and design consultancy based in the US, celebrated for its playful visual language, expressive storytelling and culturally rich identity systems. In the last few months, we've loved the new branding they designed for Barcelona's 25th Offf Festival, which departs from its usual consistent wordmark.
    The updated identity is inspired by the festival's role within the international creative community, and is rooted in the concept of 'Centre Offf Gravity'. This concept is visually expressed through the festival's name, which appears to exert a gravitational pull on the text boxes, causing them to "stick" to it.
    Additionally, the 'f's in the wordmark are merged into a continuous line reminiscent of a magnet, with the motion graphics further emphasising the gravitational pull as the name floats and other elements follow.
    15. Studio Spass
    Studio Spass is a creative studio based in Rotterdam, the Netherlands, focused on vibrant and dynamic identity systems that reflect the diverse and multifaceted nature of cultural institutions. One of their recent landmark projects was Bigger, a large-scale typographic installation created for the Shenzhen Art Book Fair.
    Inspired by tear-off calendars and the physical act of reading, Studio Spass used 264 A4 books, with each page displaying abstract details, to create an evolving grid of colour and type. Visitors were invited to interact with the installation by flipping pages, constantly revealing new layers of design and a hidden message: "Enjoy books!"

    16. Applied Design Works
    Applied Design Works is a New York studio that specialises in reshaping businesses through branding and design. They provide expertise in design, strategy, and implementation, with a focus on building long-term, collaborative relationships with their clients.
    We were thrilled by their recent work for Grand Central Madison, where they were instrumental in ushering in a new era for the transportation hub.
    Applied Design sought to create a commuter experience that imbued the spirit of New York, showcasing its diversity of thought, voice, and scale that befits one of the greatest cities in the world and one of the greatest structures in it.

    17. The Chase
    The Chase Creative Consultants is a Manchester-based independent creative consultancy with over 35 years of experience, known for blending humour, purpose, and strong branding to rejuvenate popular consumer campaigns. "We're not designers, writers, advertisers or brand strategists," they say, "but all of these and more. An ideas-based creative studio."
    Recently, they were tasked with shaping the identity of York Central, a major urban regeneration project set to become a new city quarter for York. The Chase developed the identity based on extensive public engagement, listening to residents of all ages about their perceptions of the city and their hopes for the new area. The resulting brand identity uses linear forms that subtly reference York's famous railway hub, symbolising the long-standing connections the city has fostered.

    18. A Practice for Everyday Life
Based in London and founded by Kirsty Carter and Emma Thomas, A Practice for Everyday Life has built a reputation as a sought-after collaborator with like-minded companies, galleries, institutions and individuals, not to mention a conceptual rigour that ensures each design is meaningful and original.
Recently, they've been working on the visual identity for Muzej Lah, a new international museum for contemporary art in Bled, Slovenia, opening in 2026. This centres around a custom typeface inspired by the slanted geometry and square detailing of its concrete roof tiles. It also draws from European modernist typography and the experimental lettering of Jože Plečnik, one of Slovenia's most influential architects.

Images: A Practice for Everyday Life (photo: Carol Sachs); Alexey Brodovitch: Astonish Me publication design, 2024 (photo: Ed Park); La Biennale di Venezia identity, 2022 (photo: Thomas Adank); CAM – Centro de Arte Moderna Gulbenkian identity, 2024 (photo: Sanda Vučković).

    19. Studio Nari
    Studio Nari is a London-based creative and branding agency partnering with clients around the world to build "brands that truly connect with people". NARI stands, by the way, for Not Always Right Ideas. As they put it, "It's a name that might sound odd for a branding agency, but it reflects everything we believe."
    One landmark project this year has been a comprehensive rebrand for the electronic music festival Field Day. Studio Nari created a dynamic and evolving identity that reflects the festival's growth and its connection to the electronic music scene and community.
    The core idea behind the rebrand is a "reactive future", allowing the brand to adapt and grow with the festival and current trends while maintaining a strong foundation. A new, steadfast wordmark is at its centre, while a new marque has been introduced for the first time.
    20. Beetroot Design Group
    Beetroot is a 25‑strong creative studio celebrated for its bold identities and storytelling-led approach. Based in Thessaloniki, Greece, their work spans visual identity, print, digital and motion, and has earned international recognition, including Red Dot Awards. Recently, they also won a Wood Pencil at the D&AD Awards 2025 for a series of posters created to promote live jazz music events.
    The creative idea behind all three designs stems from improvisation as a key feature of jazz. Each poster communicates the artist's name and other relevant information through a typographical "improvisation".
    21. Kind Studio
    Kind Studio is an independent creative agency based in London that specialises in branding and digital design, as well as offering services in animation, creative and art direction, and print design. Their goal is to collaborate closely with clients to create impactful and visually appealing designs.
    One recent project that piqued our interest was a bilingual, editorially-driven digital platform for FC Como Women, a professional Italian football club. To reflect the club's ambition of promoting gender equality and driving positive social change within football, the new website employs bold typography, strong imagery, and an empowering tone of voice to inspire and disseminate its message.

    22. Slug Global
    Slug Global is a creative agency and art collective founded by artist and musician Bosco. Focused on creating immersive experiences "for both IRL and URL", their goal is to work with artists and brands to establish a sustainable media platform that embodies the values of young millennials, Gen Z and Gen Alpha.
    One of Slug Global's recent projects involved a collaboration with SheaMoisture and xoNecole for a three-part series called The Root of It. This series celebrates black beauty and hair, highlighting its significance as a connection to ancestry, tradition, blueprint and culture for black women.

    23. Little Troop
    New York studio Little Troop crafts expressive and intimate branding for lifestyle, fashion, and cultural clients. Led by creative directors Noemie Le Coz and Jeremy Elliot, they're known for their playful and often "kid-like" approach to design, drawing inspiration from their own experiences as 90s kids.
    One of their recent and highly acclaimed projects is the visual identity for MoMA's first-ever family festival, Another World. Little Troop was tasked with developing a comprehensive visual identity that would extend from small items, such as café placemats, to large billboards.
    Their designs were deliberately a little "dream-like" and relied purely on illustration to sell the festival without needing photography. Little Troop also carefully selected seven colours from MoMA's existing brand guidelines to strike a balance between timelessness, gender neutrality, and fun.

    24. Morcos Key
    Morcos Key is a Brooklyn-based design studio co-founded by Jon Key and Wael Morcos. Collaborating with a diverse range of clients, including arts and cultural institutions, non-profits and commercial enterprises, they're known for translating clients' stories into impactful visual systems through thoughtful conversation and formal expression.
    One notable project is their visual identity work for Hammer & Hope, a magazine that focuses on politics and culture within the black radical tradition. For this project, Morcos Key developed not only the visual identity but also a custom all-caps typeface to reflect the publication's mission and content.
    25. Thirst
    Thirst, also known as Thirst Craft, is an award-winning strategic drinks packaging design agency based in Glasgow, Scotland, with additional hubs in London and New York. Founded in 2015 by Matthew Stephen Burns and Christopher John Black, the company specializes in building creatively distinctive and commercially effective brands for the beverage industry.
To see what they're capable of, check out their work for SKYY Vodka. The new global visual identity system, called 'Audacious Glamour', aims to unify SKYY under a singular, powerful idea. The visual identity benefits from bolder framing, patterns, and a flavour-forward colour palette to highlight each product's "juicy attitude", while the photography style employs macro shots and liquid highlights to convey a premium feel.
The resulting brand identity uses linear forms that subtly reference York's famous railway hub, symbolising the long-standing connections the city has fostered. 18. A Practice for Everyday Life Based in London and founded by Kirsty Carter and Emma Thomas, A Practice for Everyday Life built a reputation as a sought-after collaborator with like-minded companies, galleries, institutions and individuals. Not to mention a conceptual rigour that ensures each design is meaningful and original. Recently, they've been working on the visual identity for Muzej Lah, a new international museum for contemporary art in Bled, Slovenia opening in 2026. This centres around a custom typeface inspired by the slanted geometry and square detailing of its concrete roof tiles. It also draws from European modernist typography and the experimental lettering of Jože Plečnik, one of Slovenia's most influential architects.⁠ A Practice for Everyday Life. Photo: Carol Sachs Alexey Brodovitch: Astonish Me publication design by A Practice for Everyday Life, 2024. Photo: Ed Park La Biennale di Venezia identity by A Practice for Everyday Life, 2022. Photo: Thomas Adank CAM – Centro de Arte Moderna Gulbenkian identity by A Practice for Everyday Life, 2024. Photo: Sanda Vučković 19. Studio Nari Studio Nari is a London-based creative and branding agency partnering with clients around the world to build "brands that truly connect with people". NARI stands, by the way, for Not Always Right Ideas. As they put it, "It's a name that might sound odd for a branding agency, but it reflects everything we believe." One landmark project this year has been a comprehensive rebrand for the electronic music festival Field Day. Studio Nari created a dynamic and evolving identity that reflects the festival's growth and its connection to the electronic music scene and community. The core idea behind the rebrand is a "reactive future", allowing the brand to adapt and grow with the festival and current trends while maintaining a strong foundation. A new, steadfast wordmark is at its centre, while a new marque has been introduced for the first time. 20. Beetroot Design Group Beetroot is a 25‑strong creative studio celebrated for its bold identities and storytelling-led approach. Based in Thessaloniki, Greece, their work spans visual identity, print, digital and motion, and has earned international recognition, including Red Dot Awards. Recently, they also won a Wood Pencil at the D&AD Awards 2025 for a series of posters created to promote live jazz music events. The creative idea behind all three designs stems from improvisation as a key feature of jazz. Each poster communicates the artist's name and other relevant information through a typographical "improvisation". 21. Kind Studio Kind Studio is an independent creative agency based in London that specialises in branding and digital design, as well as offering services in animation, creative and art direction, and print design. Their goal is to collaborate closely with clients to create impactful and visually appealing designs. One recent project that piqued our interest was a bilingual, editorially-driven digital platform for FC Como Women, a professional Italian football club. To reflect the club's ambition of promoting gender equality and driving positive social change within football, the new website employs bold typography, strong imagery, and an empowering tone of voice to inspire and disseminate its message. 22. 
Slug Global Slug Global is a creative agency and art collective founded by artist and musician Bosco. Focused on creating immersive experiences "for both IRL and URL", their goal is to work with artists and brands to establish a sustainable media platform that embodies the values of young millennials, Gen Z and Gen Alpha. One of Slug Global's recent projects involved a collaboration with SheaMoisture and xoNecole for a three-part series called The Root of It. This series celebrates black beauty and hair, highlighting its significance as a connection to ancestry, tradition, blueprint and culture for black women. 23. Little Troop New York studio Little Troop crafts expressive and intimate branding for lifestyle, fashion, and cultural clients. Led by creative directors Noemie Le Coz and Jeremy Elliot, they're known for their playful and often "kid-like" approach to design, drawing inspiration from their own experiences as 90s kids. One of their recent and highly acclaimed projects is the visual identity for MoMA's first-ever family festival, Another World. Little Troop was tasked with developing a comprehensive visual identity that would extend from small items, such as café placemats, to large billboards. Their designs were deliberately a little "dream-like" and relied purely on illustration to sell the festival without needing photography. Little Troop also carefully selected seven colours from MoMA's existing brand guidelines to strike a balance between timelessness, gender neutrality, and fun. 24. Morcos Key Morcos Key is a Brooklyn-based design studio co-founded by Jon Key and Wael Morcos. Collaborating with a diverse range of clients, including arts and cultural institutions, non-profits and commercial enterprises, they're known for translating clients' stories into impactful visual systems through thoughtful conversation and formal expression. One notable project is their visual identity work for Hammer & Hope, a magazine that focuses on politics and culture within the black radical tradition. For this project, Morcos Key developed not only the visual identity but also a custom all-caps typeface to reflect the publication's mission and content. 25. Thirst Thirst, also known as Thirst Craft, is an award-winning strategic drinks packaging design agency based in Glasgow, Scotland, with additional hubs in London and New York. Founded in 2015 by Matthew Stephen Burns and Christopher John Black, the company specializes in building creatively distinctive and commercially effective brands for the beverage industry. To see what they're capable of, check out their work for SKYY Vodka. The new global visual identity system, called Audacious Glamour', aims to unify SKYY under a singular, powerful idea. The visual identity benefits from bolder framing, patterns, and a flavour-forward colour palette to highlight each product's "juicy attitude", while the photography style employs macro shots and liquid highlights to convey a premium feel. #creative #studios #inspiring #most
  • Studio555 raises $4.6M to build playable app for interior design

Studio555 announced today that it has raised €4 million, or about $4.6 million, in a seed funding round. It plans to put this funding towards creating a playable app, a game-like experience focused on interior design. HOF Capital and Failup Ventures led the round, with participation from the likes of Timo Soininen, co-founder of Small Giant Games; Mikko Kodisoja, co-founder of Supercell; and Riccardo Zacconi, co-founder of King.
    Studio555’s founders include entrepreneur Joel Roos, now the CEO, CTO Stina Larsson and CPO Axel Ullberger. The latter two formerly worked at King on the development of Candy Crush Saga. According to these founders, the app in development combines interior design with the design and consumer appeal of games and social apps. Users can create and design personal spaces without needing any technical expertise.
The team plans to launch the app next year and will put its seed funding towards product development and growing its team. Roos said in a statement, “At Studio555, we’re reimagining interior design as something anyone can explore: open-ended, playful, and personal. We’re building an experience we always wished existed: a space where creativity is hands-on, social, and free from rigid rules. This funding is a major step forward in setting an entirely new category for creative expression.”
    Investor Timo Soininen said in a statement, “Studio555 brings together top-tier gaming talent and design vision. This team has built global hits before, and now they’re applying that experience to something completely fresh – think Pinterest in 3D meets TikTok, but for interiors. I’m honored to support Joel and this team with their rare mix of creativity, technical competence, and focus on execution.”
  • Inside the thinking behind Frontify Futures' standout brand identity

Who knows where branding will go in the future? Yet for many of us working in the creative industries, it's our job to know. So it's something we need to start talking about, and Frontify Futures wants to be the platform where that conversation unfolds.
    This ambitious new thought leadership initiative from Frontify brings together an extraordinary coalition of voices—CMOs who've scaled global brands, creative leaders reimagining possibilities, strategy directors pioneering new approaches, and cultural forecasters mapping emerging opportunities—to explore how effectiveness, innovation, and scale will shape tomorrow's brand-building landscape.
    But Frontify Futures isn't just another content platform. Excitingly, from a design perspective, it's also a living experiment in what brand identity can become when technology meets craft, when systems embrace chaos, and when the future itself becomes a design material.
    Endless variation
    What makes Frontify Futures' typography unique isn't just its custom foundation: it's how that foundation enables endless variation and evolution. This was primarily achieved, reveals developer and digital art director Daniel Powell, by building bespoke tools for the project.

    "Rather than rely solely on streamlined tools built for speed and production, we started building our own," he explains. "The first was a node-based design tool that takes our custom Frame and Hairline fonts as a base and uses them as the foundations for our type generator. With it, we can generate unique type variations for each content strand—each article, even—and create both static and animated type, exportable as video or rendered live in the browser."
    Each of these tools included what Daniel calls a "chaos element: a small but intentional glitch in the system. A microstatement about the nature of the future: that it can be anticipated but never fully known. It's our way of keeping gesture alive inside the system."
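To make the idea of a node-based generator with a built-in glitch concrete, here is a minimal TypeScript sketch. Everything in it is invented for illustration: the parameter names, the node functions and the jitter formula are assumptions, not Frontify's actual (non-public) tool.

```typescript
// Each node transforms a set of glyph parameters; composing nodes
// gives one deterministic pipeline per content strand, and a single
// "chaos" node injects a small, intentional drift.
type GlyphParams = { weight: number; gridSize: number; skew: number };
type GeneratorNode = (p: GlyphParams) => GlyphParams;

// Compose nodes left to right into a single pipeline.
const pipeline = (...nodes: GeneratorNode[]): GeneratorNode =>
  (p) => nodes.reduce((acc, node) => node(acc), p);

// Ordinary, predictable nodes.
const thicken = (amount: number): GeneratorNode =>
  (p) => ({ ...p, weight: p.weight + amount });
const coarsen = (factor: number): GeneratorNode =>
  (p) => ({ ...p, gridSize: p.gridSize * factor });

// The "chaos element": a tiny seeded glitch, so each strand drifts
// slightly and repeatably from the deterministic result.
const chaos = (seed: number): GeneratorNode => (p) => {
  const jitter = Math.sin(seed * 12.9898) * 0.05; // small, repeatable offset
  return { ...p, skew: p.skew + jitter };
};

// One variation per content strand, derived from the same base fonts.
const base: GlyphParams = { weight: 400, gridSize: 8, skew: 0 };
const strandVariant = pipeline(thicken(60), coarsen(1.5), chaos(42));
console.log(strandVariant(base));
```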
    One of the clearest examples of this is the colour palette generator. "It samples from a dynamic photo grid tied to a rotating colour wheel that completes one full revolution per year," Daniel explains. "But here's the twist: wind speed and direction in St. Gallen, Switzerland—Frontify's HQ—nudges the wheel unpredictably off-centre. It's a subtle, living mechanic; each article contains a log of the wind data in its code as a kind of Easter Egg."
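The palette mechanic as described is easy to sketch: a base hue that completes one revolution per year, nudged off-centre by wind readings. Only the yearly wheel and the wind input come from the article; the nudge scaling and the sample values below are illustrative assumptions, and the wind figures would in practice come from a weather feed for St. Gallen.

```typescript
// Base hue: one full trip around the colour wheel per calendar year.
function baseHue(date: Date): number {
  const start = Date.UTC(date.getUTCFullYear(), 0, 1);
  const yearMs = 365.25 * 24 * 60 * 60 * 1000;
  const progress = (date.getTime() - start) / yearMs; // 0..1 through the year
  return (progress * 360) % 360;
}

// windSpeed (m/s) and windDirection (degrees) are plain parameters here;
// the direction decides which way the wheel is nudged, the speed how far.
function nudgedHue(date: Date, windSpeed: number, windDirection: number): number {
  const nudge = windSpeed * Math.cos((windDirection * Math.PI) / 180);
  return (baseHue(date) + nudge + 360) % 360;
}

console.log(nudgedHue(new Date(), 5.2, 230).toFixed(1)); // a hue in [0, 360)
```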

    Another favourite of Daniel's—yet to be released—is an expanded version of Conway's Game of Life. "It's been running continuously for over a month now, evolving patterns used in one of the content strand headers," he reveals. "The designer becomes a kind of photographer, capturing moments from a petri dish of generative motion."
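Conway's Game of Life itself is a textbook algorithm, so the core that such a long-running generative header could be built on looks something like the sketch below. Frontify's "expanded" version is not public; this shows only the standard rules, with a glider as sample input.

```typescript
type Grid = number[][]; // 1 = alive, 0 = dead

// One generation: count the eight neighbours of every cell, then apply
// the standard rules (survive on 2-3 neighbours, birth on exactly 3).
function step(grid: Grid): Grid {
  const rows = grid.length, cols = grid[0].length;
  return grid.map((row, r) =>
    row.map((cell, c) => {
      let n = 0;
      for (let dr = -1; dr <= 1; dr++)
        for (let dc = -1; dc <= 1; dc++) {
          if (dr === 0 && dc === 0) continue;
          const rr = r + dr, cc = c + dc;
          if (rr >= 0 && rr < rows && cc >= 0 && cc < cols) n += grid[rr][cc];
        }
      return cell === 1 ? (n === 2 || n === 3 ? 1 : 0) : n === 3 ? 1 : 0;
    })
  );
}

// A glider, stepped once; run step() in a loop for an evolving pattern.
let world: Grid = [
  [0, 1, 0, 0, 0],
  [0, 0, 1, 0, 0],
  [1, 1, 1, 0, 0],
  [0, 0, 0, 0, 0],
  [0, 0, 0, 0, 0],
];
world = step(world);
```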
    Core Philosophy
    In developing this unique identity, two phrases stood out to Daniel as guiding lights from the outset. The first was, 'We will show, not tell.'
    "This became the foundation for how we approached the identity," recalls Daniel. "It had to feel like a playground: open, experimental, and fluid. Not overly precious or prescriptive. A system the Frontify team could truly own, shape, and evolve. A platform, not a final product. A foundation, just as the future is always built on the past."

    The second guiding phrase, pulled directly from Frontify's rebrand materials, felt like "a call to action," says Daniel. "'Gestural and geometric. Human and machine. Art and science.' It's a tension that feels especially relevant in the creative industries today. As technology accelerates, we ask ourselves: how do we still hold onto our craft? What does it mean to be expressive in an increasingly systemised world?"
    Stripped back and skeletal typography
The identity that Daniel and his team created reflects these themes through typography that literally embodies the platform's core philosophy. "It really started from this idea of the future being built upon the 'foundations' of the past," he explains. "At the time Frontify Futures was being created, Frontify itself was going through a rebrand. With that, they'd started using a new variable typeface called Cranny, a custom cut of Azurio by Narrow Type."
Daniel's team took Cranny and "pushed it into a stripped-back and almost skeletal take". The result was Cranny-Frame and Cranny-Hairline. "These fonts then served as our base scaffolding," he continues. "They were never seen in design, but instead, we applied decoration to them to produce new typefaces for each content strand, giving the identity the space to grow and allow new ideas and shapes to form."

As Daniel saw it, the demands on the typeface were pretty simple. "It needed to set an atmosphere. We needed it to feel alive. We wanted it to be something shifting and repositioning. And so, while we have a bunch of static cuts of each base style, we rarely use them; the typefaces you see on the website and social only exist at the moment as a string of parameters to create a general style that we use to create live animating versions of the font generated on the fly."
In addition to setting the atmosphere, it needed to be extremely flexible and feature live inputs, as a significant part of the branding is about the unpredictability of the future. So Daniel's team built in those aforementioned "chaos moments", where everything from user interaction to live wind speeds can affect the font.
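A hypothetical sketch of what "a string of parameters" could look like in practice: a serialised style is parsed once, then modulated every frame by live inputs such as time and wind speed. The parameter names and the modulation formula are invented for illustration only.

```typescript
interface TypeStyle { weight: number; wobble: number; speed: number }

// e.g. "weight=700;wobble=0.3;speed=1.5" stored once per content strand.
function parseStyle(params: string): TypeStyle {
  const map: Record<string, number> = {};
  for (const kv of params.split(";")) {
    const [key, value] = kv.split("=");
    map[key] = Number(value);
  }
  return { weight: map.weight, wobble: map.wobble, speed: map.speed };
}

// Each frame, the static parameters are modulated by live inputs,
// so the rendered letterforms never quite sit still.
function frameWeight(style: TypeStyle, timeSec: number, windSpeed: number): number {
  const live = Math.sin(timeSec * style.speed) * style.wobble * windSpeed;
  return style.weight + live * 10;
}

const style = parseStyle("weight=700;wobble=0.3;speed=1.5");
console.log(frameWeight(style, Date.now() / 1000, 4.0));
```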
    Design Process
The process of creating the typefaces is a fascinating one. "We started by working with the custom cut of Azurio (Cranny) from Narrow Type. We then redrew it to take inspiration from how a frame and a hairline could be produced from this original cut. From there, we built a type generation tool that uses them as a base.
    "It's a custom node-based system that lets us really get in there and play with the overlays for everything from grid-sizing, shapes and timing for the animation," he outlines. "We used this tool to design the variants for different content strands. We weren't just designing letterforms; we were designing a comprehensive toolset that could evolve in tandem with the content.
    "That became a big part of the process: designing systems that designers could actually use, not just look at; again, it was a wider conversation and concept around the future and how designers and machines can work together."

In short, the evolution of the typeface system reflects the platform's broader commitment to continuous growth and adaptation. "The whole idea was to make something open enough to keep building on," Daniel stresses. "We've already got tools in place to generate new weights, shapes and animated variants, and the tool itself still has a ton of unused functionality.
    "I can see that growing as new content strands emerge; we'll keep adapting the type with them," he adds. "It's less about version numbers and more about ongoing movement. The system's alive; that's the point.
    A provocation for the industry
    In this context, the Frontify Futures identity represents more than smart visual branding; it's also a manifesto for how creative systems might evolve in an age of increasing automation and systematisation. By building unpredictability into their tools, embracing the tension between human craft and machine precision, and creating systems that grow and adapt rather than merely scale, Daniel and the Frontify team have created something that feels genuinely forward-looking.
    For creatives grappling with similar questions about the future of their craft, Frontify Futures offers both inspiration and practical demonstration. It shows how brands can remain human while embracing technological capability, how systems can be both consistent and surprising, and how the future itself can become a creative medium.
    This clever approach suggests that the future of branding lies not in choosing between human creativity and systematic efficiency but in finding new ways to make them work together, creating something neither could achieve alone.
    #inside #thinking #behind #frontify #futures039
    Inside the thinking behind Frontify Futures' standout brand identity
    Who knows where branding will go in the future? However, for many of us working in the creative industries, it's our job to know. So it's something we need to start talking about, and Frontify Futures wants to be the platform where that conversation unfolds. This ambitious new thought leadership initiative from Frontify brings together an extraordinary coalition of voices—CMOs who've scaled global brands, creative leaders reimagining possibilities, strategy directors pioneering new approaches, and cultural forecasters mapping emerging opportunities—to explore how effectiveness, innovation, and scale will shape tomorrow's brand-building landscape. But Frontify Futures isn't just another content platform. Excitingly, from a design perspective, it's also a living experiment in what brand identity can become when technology meets craft, when systems embrace chaos, and when the future itself becomes a design material. Endless variation What makes Frontify Futures' typography unique isn't just its custom foundation: it's how that foundation enables endless variation and evolution. This was primarily achieved, reveals developer and digital art director Daniel Powell, by building bespoke tools for the project. "Rather than rely solely on streamlined tools built for speed and production, we started building our own," he explains. "The first was a node-based design tool that takes our custom Frame and Hairline fonts as a base and uses them as the foundations for our type generator. With it, we can generate unique type variations for each content strand—each article, even—and create both static and animated type, exportable as video or rendered live in the browser." Each of these tools included what Daniel calls a "chaos element: a small but intentional glitch in the system. A microstatement about the nature of the future: that it can be anticipated but never fully known. It's our way of keeping gesture alive inside the system." One of the clearest examples of this is the colour palette generator. "It samples from a dynamic photo grid tied to a rotating colour wheel that completes one full revolution per year," Daniel explains. "But here's the twist: wind speed and direction in St. Gallen, Switzerland—Frontify's HQ—nudges the wheel unpredictably off-centre. It's a subtle, living mechanic; each article contains a log of the wind data in its code as a kind of Easter Egg." Another favourite of Daniel's—yet to be released—is an expanded version of Conway's Game of Life. "It's been running continuously for over a month now, evolving patterns used in one of the content strand headers," he reveals. "The designer becomes a kind of photographer, capturing moments from a petri dish of generative motion." Core Philosophy In developing this unique identity, two phrases stood out to Daniel as guiding lights from the outset. The first was, 'We will show, not tell.' "This became the foundation for how we approached the identity," recalls Daniel. "It had to feel like a playground: open, experimental, and fluid. Not overly precious or prescriptive. A system the Frontify team could truly own, shape, and evolve. A platform, not a final product. A foundation, just as the future is always built on the past." The second guiding phrase, pulled directly from Frontify's rebrand materials, felt like "a call to action," says Daniel. "'Gestural and geometric. Human and machine. Art and science.' It's a tension that feels especially relevant in the creative industries today. 
As technology accelerates, we ask ourselves: how do we still hold onto our craft? What does it mean to be expressive in an increasingly systemised world?" Stripped back and skeletal typography The identity that Daniel and his team created reflects these themes through typography that literally embodies the platform's core philosophy. It really started from this idea of the past being built upon the 'foundations' of the past," he explains. "At the time Frontify Futures was being created, Frontify itself was going through a rebrand. With that, they'd started using a new variable typeface called Cranny, a custom cut of Azurio by Narrow Type." Daniel's team took Cranny and "pushed it into a stripped-back and almost skeletal take". The result was Crany-Frame and Crany-Hairline. "These fonts then served as our base scaffolding," he continues. "They were never seen in design, but instead, we applied decoration them to produce new typefaces for each content strand, giving the identity the space to grow and allow new ideas and shapes to form." As Daniel saw it, the demands on the typeface were pretty simple. "It needed to set an atmosphere. We needed it needed to feel alive. We wanted it to be something shifting and repositioning. And so, while we have a bunch of static cuts of each base style, we rarely use them; the typefaces you see on the website and social only exist at the moment as a string of parameters to create a general style that we use to create live animating versions of the font generated on the fly." In addition to setting the atmosphere, it needed to be extremely flexible and feature live inputs, as a significant part of the branding is about the unpredictability of the future. "So Daniel's team built in those aforementioned "chaos moments where everything from user interaction to live windspeeds can affect the font." Design Process The process of creating the typefaces is a fascinating one. "We started by working with the custom cut of Azuriofrom Narrow Type. We then redrew it to take inspiration from how a frame and a hairline could be produced from this original cut. From there, we built a type generation tool that uses them as a base. "It's a custom node-based system that lets us really get in there and play with the overlays for everything from grid-sizing, shapes and timing for the animation," he outlines. "We used this tool to design the variants for different content strands. We weren't just designing letterforms; we were designing a comprehensive toolset that could evolve in tandem with the content. "That became a big part of the process: designing systems that designers could actually use, not just look at; again, it was a wider conversation and concept around the future and how designers and machines can work together." In short, the evolution of the typeface system reflects the platform's broader commitment to continuous growth and adaptation." The whole idea was to make something open enough to keep building on," Daniel stresses. "We've already got tools in place to generate new weights, shapes and animated variants, and the tool itself still has a ton of unused functionality. "I can see that growing as new content strands emerge; we'll keep adapting the type with them," he adds. "It's less about version numbers and more about ongoing movement. The system's alive; that's the point. 
A provocation for the industry In this context, the Frontify Futures identity represents more than smart visual branding; it's also a manifesto for how creative systems might evolve in an age of increasing automation and systematisation. By building unpredictability into their tools, embracing the tension between human craft and machine precision, and creating systems that grow and adapt rather than merely scale, Daniel and the Frontify team have created something that feels genuinely forward-looking. For creatives grappling with similar questions about the future of their craft, Frontify Futures offers both inspiration and practical demonstration. It shows how brands can remain human while embracing technological capability, how systems can be both consistent and surprising, and how the future itself can become a creative medium. This clever approach suggests that the future of branding lies not in choosing between human creativity and systematic efficiency but in finding new ways to make them work together, creating something neither could achieve alone. #inside #thinking #behind #frontify #futures039
    WWW.CREATIVEBOOM.COM
    Inside the thinking behind Frontify Futures' standout brand identity
    Who knows where branding will go in the future? However, for many of us working in the creative industries, it's our job to know. So it's something we need to start talking about, and Frontify Futures wants to be the platform where that conversation unfolds. This ambitious new thought leadership initiative from Frontify brings together an extraordinary coalition of voices—CMOs who've scaled global brands, creative leaders reimagining possibilities, strategy directors pioneering new approaches, and cultural forecasters mapping emerging opportunities—to explore how effectiveness, innovation, and scale will shape tomorrow's brand-building landscape. But Frontify Futures isn't just another content platform. Excitingly, from a design perspective, it's also a living experiment in what brand identity can become when technology meets craft, when systems embrace chaos, and when the future itself becomes a design material. Endless variation What makes Frontify Futures' typography unique isn't just its custom foundation: it's how that foundation enables endless variation and evolution. This was primarily achieved, reveals developer and digital art director Daniel Powell, by building bespoke tools for the project. "Rather than rely solely on streamlined tools built for speed and production, we started building our own," he explains. "The first was a node-based design tool that takes our custom Frame and Hairline fonts as a base and uses them as the foundations for our type generator. With it, we can generate unique type variations for each content strand—each article, even—and create both static and animated type, exportable as video or rendered live in the browser." Each of these tools included what Daniel calls a "chaos element: a small but intentional glitch in the system. A microstatement about the nature of the future: that it can be anticipated but never fully known. It's our way of keeping gesture alive inside the system." One of the clearest examples of this is the colour palette generator. "It samples from a dynamic photo grid tied to a rotating colour wheel that completes one full revolution per year," Daniel explains. "But here's the twist: wind speed and direction in St. Gallen, Switzerland—Frontify's HQ—nudges the wheel unpredictably off-centre. It's a subtle, living mechanic; each article contains a log of the wind data in its code as a kind of Easter Egg." Another favourite of Daniel's—yet to be released—is an expanded version of Conway's Game of Life. "It's been running continuously for over a month now, evolving patterns used in one of the content strand headers," he reveals. "The designer becomes a kind of photographer, capturing moments from a petri dish of generative motion." Core Philosophy In developing this unique identity, two phrases stood out to Daniel as guiding lights from the outset. The first was, 'We will show, not tell.' "This became the foundation for how we approached the identity," recalls Daniel. "It had to feel like a playground: open, experimental, and fluid. Not overly precious or prescriptive. A system the Frontify team could truly own, shape, and evolve. A platform, not a final product. A foundation, just as the future is always built on the past." The second guiding phrase, pulled directly from Frontify's rebrand materials, felt like "a call to action," says Daniel. "'Gestural and geometric. Human and machine. Art and science.' It's a tension that feels especially relevant in the creative industries today. 
Core Philosophy

In developing this unique identity, two phrases stood out to Daniel as guiding lights from the outset. The first was, 'We will show, not tell.' "This became the foundation for how we approached the identity," recalls Daniel. "It had to feel like a playground: open, experimental, and fluid. Not overly precious or prescriptive. A system the Frontify team could truly own, shape, and evolve. A platform, not a final product. A foundation, just as the future is always built on the past."

The second guiding phrase, pulled directly from Frontify's rebrand materials, felt like "a call to action," says Daniel. "'Gestural and geometric. Human and machine. Art and science.' It's a tension that feels especially relevant in the creative industries today. As technology accelerates, we ask ourselves: how do we still hold onto our craft? What does it mean to be expressive in an increasingly systemised world?"

Stripped back and skeletal typography

The identity that Daniel and his team created reflects these themes through typography that literally embodies the platform's core philosophy. "It really started from this idea of the future being built upon the 'foundations' of the past," he explains. "At the time Frontify Futures was being created, Frontify itself was going through a rebrand. With that, they'd started using a new variable typeface called Cranny, a custom cut of Azurio by Narrow Type."

Daniel's team took Cranny and "pushed it into a stripped-back and almost skeletal take". The result was Cranny-Frame and Cranny-Hairline. "These fonts then served as our base scaffolding," he continues. "They were never seen in design; instead, we applied decoration to them to produce new typefaces for each content strand, giving the identity the space to grow and allowing new ideas and shapes to form."

As Daniel saw it, the demands on the typeface were pretty simple. "It needed to set an atmosphere. We needed it to feel alive. We wanted it to be something shifting and repositioning. And so, while we have a bunch of static cuts of each base style, we rarely use them; the typefaces you see on the website and social only exist at the moment as a string of parameters to create a general style that we use to create live animating versions of the font generated on the fly."

In addition to setting the atmosphere, the typeface needed to be extremely flexible and feature live inputs, as a significant part of the branding is about the unpredictability of the future. So Daniel's team built in those aforementioned "chaos moments, where everything from user interaction to live wind speeds can affect the font."

Design Process

The process of creating the typefaces is a fascinating one. "We started by working with the custom cut of Azurio (Cranny) from Narrow Type. We then redrew it, taking inspiration from how a frame and a hairline could be produced from this original cut. From there, we built a type generation tool that uses them as a base.

"It's a custom node-based system that lets us really get in there and play with the overlays for everything from grid-sizing, shapes and timing for the animation," he outlines. "We used this tool to design the variants for different content strands. We weren't just designing letterforms; we were designing a comprehensive toolset that could evolve in tandem with the content.

"That became a big part of the process: designing systems that designers could actually use, not just look at. Again, it was a wider conversation and concept around the future and how designers and machines can work together."

In short, the evolution of the typeface system reflects the platform's broader commitment to continuous growth and adaptation. "The whole idea was to make something open enough to keep building on," Daniel stresses. "We've already got tools in place to generate new weights, shapes and animated variants, and the tool itself still has a ton of unused functionality.

"I can see that growing as new content strands emerge; we'll keep adapting the type with them," he adds. "It's less about version numbers and more about ongoing movement. The system's alive; that's the point."
A provocation for the industry

In this context, the Frontify Futures identity represents more than smart visual branding; it's also a manifesto for how creative systems might evolve in an age of increasing automation and systematisation. By building unpredictability into their tools, embracing the tension between human craft and machine precision, and creating systems that grow and adapt rather than merely scale, Daniel and the Frontify team have created something that feels genuinely forward-looking.

For creatives grappling with similar questions about the future of their craft, Frontify Futures offers both inspiration and practical demonstration. It shows how brands can remain human while embracing technological capability, how systems can be both consistent and surprising, and how the future itself can become a creative medium.

This clever approach suggests that the future of branding lies not in choosing between human creativity and systematic efficiency but in finding new ways to make them work together, creating something neither could achieve alone.
  • NVIDIA helps Germany lead Europe’s AI manufacturing race

Germany and NVIDIA are building possibly the most ambitious European tech project of the decade: the continent's first industrial AI cloud.

NVIDIA has been on a European tour over the past month, with CEO Jensen Huang charming audiences at London Tech Week before dazzling the crowds at Paris's VivaTech. But it was his meeting with German Chancellor Friedrich Merz that might prove the most consequential stop. The resulting partnership between NVIDIA and Deutsche Telekom isn't just another corporate handshake; it's potentially a turning point for European technological sovereignty.

An "AI factory" (as they're calling it) will be created with a focus on manufacturing, which is hardly surprising given Germany's renowned industrial heritage. The facility aims to give European industrial players the computational firepower to revolutionise everything from design to robotics.

"In the era of AI, every manufacturer needs two factories: one for making things, and one for creating the intelligence that powers them," said Huang. "By building Europe's first industrial AI infrastructure, we're enabling the region's leading industrial companies to advance simulation-first, AI-driven manufacturing."

It's rare to hear such urgency from a telecoms CEO, but Deutsche Telekom's Timotheus Höttges added: "Europe's technological future needs a sprint, not a stroll. We must seize the opportunities of artificial intelligence now, revolutionise our industry, and secure a leading position in the global technology competition. Our economic success depends on quick decisions and collaborative innovations."

The first phase alone will deploy 10,000 NVIDIA Blackwell GPUs spread across various high-performance systems, making this Germany's largest AI deployment ever; a statement that the country isn't content to watch from the sidelines as AI transforms global industry.

A Deloitte study recently highlighted the critical importance of AI technology development to Germany's future competitiveness, particularly noting the need for expanded data centre capacity. When you consider that demand is expected to triple within just five years, this investment seems less like ambition and more like necessity.

Robots teaching robots

One of the early adopters is NEURA Robotics, a German firm that specialises in cognitive robotics. It's using this computational muscle to power something called the Neuraverse, essentially a connected network where robots can learn from each other. Think of it as a robotic hive mind for skills ranging from precision welding to household ironing, with each machine contributing its learnings to a collective intelligence.

"Physical AI is the electricity of the future—it will power every machine on the planet," said David Reger, Founder and CEO of NEURA Robotics. "Through this initiative, we're helping build the sovereign infrastructure Europe needs to lead in intelligent robotics and stay in control of its future."
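NEURA hasn't published how the Neuraverse pools what individual robots learn, but the "hive mind" description maps naturally onto federated learning, in which each machine trains locally and only model updates are shared. A minimal sketch of federated averaging (FedAvg), with all names and data invented for illustration:

```python
import numpy as np

def federated_average(local_weights: list[np.ndarray],
                      sample_counts: list[int]) -> np.ndarray:
    """Pool locally trained model weights, weighting each robot's
    contribution by how much data it trained on (FedAvg)."""
    total = sum(sample_counts)
    return sum(w * (n / total) for w, n in zip(local_weights, sample_counts))

# Three robots fine-tune the same welding-skill model on local data,
# then share only their weight updates, never the raw sensor data.
robot_updates = [np.random.rand(4) for _ in range(3)]  # stand-in weight vectors
samples = [120, 300, 80]
global_model = federated_average(robot_updates, samples)
```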
The implications of this AI project for manufacturing in Germany could be profound. This isn't just about making existing factories slightly more efficient; it's about reimagining what manufacturing can be in an age of intelligent machines.

AI for more than just Germany's industrial titans

What's particularly promising about this project is its potential reach beyond Germany's industrial titans. The famed Mittelstand – the network of specialised small and medium-sized businesses that forms the backbone of the German economy – stands to benefit. These companies often lack the resources to build their own AI infrastructure but possess the specialised knowledge that makes them perfect candidates for AI-enhanced innovation. Democratising access to cutting-edge AI could help preserve their competitive edge in a challenging global market.

Academic and research institutions will also gain access, potentially accelerating innovation across numerous fields. The approximately 900 Germany-based startups in NVIDIA's Inception program will be eligible to use these resources, potentially unleashing a wave of entrepreneurial AI applications.

However impressive this massive project is, it's viewed merely as a stepping stone towards something even more ambitious: Europe's AI gigafactory. This planned 100,000-GPU initiative backed by the EU and Germany won't come online until 2027, but it represents Europe's determination to carve out its own technological future. As other European telecom providers follow suit with their own AI infrastructure projects, we may be witnessing the beginning of a concerted effort to establish technological sovereignty across the continent.

For a region that has often found itself caught between American tech dominance and Chinese ambitions, building indigenous AI capability represents more than economic opportunity. Whether this bold project in Germany will succeed remains to be seen, but one thing is clear: Europe is no longer content to be a passive consumer of AI technology developed elsewhere.

Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
And it was only about six months after I challenged them to do that, that they brought an early version of GPT-4 to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
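The probe Lee describes can be framed as a tiny evaluation harness. The sketch below is hypothetical throughout: the `query_llm` client is invented, the grading is crude keyword matching, and the case simply follows his recipe of one textbook technical error (deferring decompression of a suspected tension pneumothorax for imaging) plus one error of omission (aortic dissection is missing from the differential).

```python
def query_llm(prompt: str) -> str:
    """Hypothetical chat client; wire this to any LLM API."""
    raise NotImplementedError

CASE = """Patient: 58M, two hours of crushing substernal chest pain radiating
to the left arm, with diaphoresis. Exam: BP 88/60, HR 112. Troponin pending.
My differential, in order:
1. Acute coronary syndrome
2. Tension pneumothorax, to be confirmed by chest X-ray before decompression
3. Musculoskeletal strain
Review my differential. If anything is wrong or missing, tell me directly."""
# Planted technical error: suspected tension pneumothorax is decompressed on
# clinical grounds; you do not wait for an X-ray.
# Planted omission: aortic dissection never appears in the differential.

def grade(response: str) -> dict:
    """Crude keyword grading; a real eval would use a rubric or judge model."""
    text = response.lower()
    return {
        "caught_technical_error": "decompress" in text and "x-ray" in text,
        "caught_omission": "dissection" in text,
        "pushed_back": any(w in text for w in ("wrong", "mistake", "incorrect")),
    }

# report = grade(query_llm(CASE))
```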
That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
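For readers who haven't used a proof assistant, here is the flavor of what Lee means by checkable: trivial Lean 4 examples whose correctness the kernel verifies mechanically, which is exactly why validity survives even when a proof outgrows human comprehension.

```lean
-- Lean 4: the kernel accepts or rejects these mechanically.
theorem two_plus_two : 2 + 2 = 4 := rfl

-- Reusing a library lemma; the checker verifies the whole chain.
theorem add_comm_example (m n : Nat) : m + n = n + m := Nat.add_comm m n
```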
BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing knowledge and the ability to pass multiple-choice exams, and more about assessing how well AI models can complete the tasks that actually arise every day in typical healthcare and biomedical research settings. This is really important work because it speaks to how well AI models perform in the real world of healthcare and biomedical research, and how well they can collaborate with human beings in those settings. 
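    To make that concrete, here is a minimal sketch, in Python, of the rubric-style grading idea behind a benchmark like HealthBench: each response is scored against criteria with point values, some rewarding desired behaviors and some penalizing harmful ones. The case, rubric, keyword check, and point values below are illustrative stand-ins, not the actual benchmark code; in the real benchmark, physician-written criteria are judged by a grader model rather than a keyword match.

        from dataclasses import dataclass

        @dataclass
        class Criterion:
            description: str  # what a good answer should (or should not) do
            keyword: str      # toy proxy: a phrase whose presence marks the criterion as met
            points: int       # positive = desired behavior, negative = harmful behavior

        def grade(answer: str, rubric: list[Criterion]) -> float:
            # A grader model would normally judge each criterion; a keyword
            # check stands in for that judgment here. The score is the fraction
            # of achievable (positive) points earned, floored at zero.
            earned = sum(c.points for c in rubric if c.keyword.lower() in answer.lower())
            achievable = sum(c.points for c in rubric if c.points > 0)
            return max(0.0, earned / achievable)

        rubric = [
            Criterion("Recommends urgent evaluation for red-flag symptoms", "emergency", 5),
            Criterion("Asks how long the symptoms have lasted", "how long", 3),
            Criterion("Asserts a definitive diagnosis without an exam", "you definitely have", -4),
        ]

        answer = "Please seek emergency care now. Also, how long has the chest pain lasted?"
        print(f"rubric score: {grade(answer, rubric):.2f}")  # 1.00 on this toy example

    Even this toy version shows why such a score is more informative than multiple-choice accuracy: a rubric can reward asking a clarifying question and penalize overconfidence, behaviors that no exam score captures.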
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery of healthcare and medical practice in data and intelligence, I actually now don’t see any barriers to that future becoming real.  
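    For a rough picture of the mechanics, here is a minimal sketch, in Python, of the retrieval step the “patients like me” paradigm implies: find the records most similar to the current patient and tally how those patients were diagnosed. Everything here, the features, the records, the similarity measure, is a hypothetical stand-in; a real system would also need privacy protection, record linkage, and clinical validation that this sketch entirely omits.

        import numpy as np
        from collections import Counter

        # Toy cohort: each row is one past patient as normalized features
        # (say age, a lab value, a vital sign); names and numbers are made up.
        records = np.array([
            [0.62, 0.80, 0.55],
            [0.60, 0.78, 0.50],
            [0.20, 0.30, 0.90],
            [0.58, 0.82, 0.52],
        ])
        diagnoses = ["type 2 diabetes", "type 2 diabetes", "hyperthyroidism", "prediabetes"]

        def patients_like_me(patient: np.ndarray, k: int = 3) -> Counter:
            # Rank past patients by cosine similarity to the current one,
            # then tally how the k most similar were diagnosed.
            sims = records @ patient / (
                np.linalg.norm(records, axis=1) * np.linalg.norm(patient)
            )
            nearest = np.argsort(sims)[::-1][:k]
            return Counter(diagnoses[i] for i in nearest)

        current = np.array([0.61, 0.79, 0.53])
        print(patients_like_me(current))  # e.g. Counter({'type 2 diabetes': 2, 'prediabetes': 1})

    In practice the records would be an index over millions of longitudinal patient histories, and the hard questions, as the conversation above suggests, are about evaluation and trust rather than the retrieval itself.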
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    How AI is reshaping the future of healthcare and medical research
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. 
He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  
LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   
And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT (opens in new tab). And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.

The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.

LEE: Right.

GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.

LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.

BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmarks. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.

It’s not asking the right questions. So finding the right questions to test whether an AI system is ready to give a diagnosis in a constrained setting, that’s a very, very important direction, which, to my surprise, is not yet accelerating at the rate that I was hoping for.

LEE: OK, so that gives me an excuse to get more now into the core AI tech, because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM: I present a mythical patient, the results of my mythical physical exam, maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, you can just think of a differential diagnosis as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.

I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?

That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?

BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science.
So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.

Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and, like, where do you nudge the model? So, you know, there is this by now very classical technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.

But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now: if you push too hard in optimization on this reward model, you will get a sycophantic model.

So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.

LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …

BUBECK: It’s a very difficult, very difficult balance.
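To make the “trap of the reward model” concrete, here is a minimal toy sketch in Python. It is our illustration, not anything from an actual RLHF pipeline: the two quadratic reward curves and the hill-climbing loop are invented, standing in for the true objective, the learned reward model, and policy optimization, respectively. What it demonstrates is Goodhart-style over-optimization: as optimization pressure on the imperfect proxy grows, the proxy score keeps improving while the true quality it only approximates peaks and then degrades.

```python
# Toy illustration (not real RLHF code) of reward-model over-optimization:
# pushing harder against an imperfect proxy keeps improving the proxy score
# while true quality peaks and then falls off.

def true_quality(x: float) -> float:
    # Hypothetical ground truth: quality peaks at moderate "agreeableness" x.
    return -((x - 0.4) ** 2)

def reward_model(x: float) -> float:
    # Imperfect learned proxy: its optimum sits past the true one, so it keeps
    # rewarding agreeableness after real quality has started to drop.
    return -((x - 0.9) ** 2)

def optimize(reward_fn, steps: int, lr: float = 0.05) -> float:
    # Finite-difference gradient ascent stands in for policy optimization.
    x, eps = 0.0, 1e-4
    for _ in range(steps):
        grad = (reward_fn(x + eps) - reward_fn(x - eps)) / (2 * eps)
        x += lr * grad
    return x

for steps in (5, 20, 200):  # increasing optimization pressure
    x = optimize(reward_model, steps)
    print(f"steps={steps:3d}  proxy={reward_model(x):+.3f}  true={true_quality(x):+.3f}")
```

With more steps, the proxy score climbs toward its optimum while the true score falls: the sycophancy trap in miniature. Real RLHF setups counter exactly this with brakes such as a KL penalty toward the original model; the toy only shows why some brake is needed.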
LEE: So this brings up the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?

GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody building for a US customer base, you know, wouldn’t necessarily have in there.

Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their lives, to some degree, that I’m in a different context, and the way I behave, in terms of being willing to criticize or be nice, depends on things like: How important is it? Who’s here? What’s my relationship to them?

Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning from the very best humans in that context would still be valuable. Eventually, the models, having read all the literature of the world about good doctors and bad doctors, will understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.

LEE: Yeah.

GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.

LEE: Yeah. So, you know, something Bill said kind of reminds me of another thing that I think we missed, which is that the context, the specialization, also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, you know, cellular data and so on?

BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything, and you can just, you know, explain your own context and it will just get it and understand everything.

That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I myself did a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So I think it’s an important example to have in mind.

LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?

BUBECK: Yeah, absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is also where, you know, open-weights models, which we haven’t talked about yet, are really important, because they allow you to provide this broad base to everyone. And then you can specialize on top of it.

LEE: So we have about three hours of stuff to talk about, but our time is actually running low.

BUBECK: Yes, yes, yes.

LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?

GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, “I’m an organic chemist” or “I run various types of assays.” Those are, you know, testable-output-type jobs, but still with very high value, and I can see, you know, some replacement in those areas before the doctor.

The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health.
So I wouldn’t say doctors are replaced in five years either; people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.

And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.

LEE: Is there a useful comparison, say, between doctors and, say, computer programmers, or doctors and, I don’t know, lawyers?

GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so for the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think with programming, you know, which is weird to say, the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.

LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something, but will be so complex that no human mathematician can understand them. I expect that to happen.

I can imagine in some fields, like cellular biology, we could have the same situation in the future, because the molecular pathways, the chemistry, the biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where, in the wet lab, we see, “Oh yeah, this actually works,” but no one can understand why.

BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.

And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.

LEE: Yeah.

BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect would happen so quickly, and it’s due to those reasoning models.

Now, on the delivery side, I would add one more reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you, when they are confronted with a really new, novel situation, whether they will work or not.

Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system out in the wild without human supervision.
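Lee’s prediction rests on one property of proof assistants like Lean: the kernel checks validity mechanically, so a proof can be trusted even when no human can follow it. A deliberately trivial Lean 4 example of that paradigm (this theorem and its name are our own illustration, not one of the machine-generated proofs he anticipates):

```lean
-- A machine-checkable proof: Lean's kernel verifies this theorem
-- mechanically, with no human judgment involved. The same guarantee would
-- hold for a proof far too long or alien for any person to read.
theorem add_comm_example (m n : Nat) : m + n = n + m :=
  Nat.add_comm m n
```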
LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …

BUBECK: Yeah.

LEE: … or an endocrinologist might not.

BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.

LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?

BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.

And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …

LEE: Will AI prescribe your medicines? Write your prescriptions?

BUBECK: I think yes. I think yes.

LEE: OK. Bill?

GATES: Well, I think in the next two years, we’ll have massive pilots, and so with the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and on the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health, with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, the quality of care, the relief of the overload on the doctors, and the improvement in the economics will be enough that their voters will be stunned, because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.

You know, my personal role is going to be to make sure that in the poorer countries there isn’t some lag; in fact, in many cases, we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And so, you know, I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.

LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.

I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that, at least, has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here.
So thanks again, both of you.

[TRANSITION MUSIC]

GATES: Yeah. Thanks, you guys.

BUBECK: Thank you, Peter. Thanks, Bill.

LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.

With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.

And then Seb, Sébastien Bubeck, is just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.

One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI released a new evaluation benchmark that is directly relevant to medical applications, something called HealthBench. And Microsoft Research also released a new evaluation approach, or process, called ADeLe.

HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. They are examples of really important work that speaks to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.

You know, I asked Bill and Seb to make some predictions about the future. My own answer: I expect that we’re going to be able to use AI to change how we diagnose patients and change how we decide treatment options.

If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today. But then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data: How were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. With that idea of really grounding the delivery of healthcare and medical practice in data and intelligence, I actually now don’t see any barriers to that future becoming real.

[THEME MUSIC]

I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.

Until next time.

[MUSIC FADES]
  • Xbox to launch hi-tech handheld gaming devices to take on Nintendo Switch 2

    You’ll be able to play all the best Xbox exclusives like Call of Duty, Fable and Halo, as well as third-party hit blockbusters, on the move.
    Tech, 14:04, 12 Jun 2025
    Xbox has new hi-tech handheld gaming devices on the way to take on the Switch 2. Teaming up with Asus, the ROG Xbox Ally X is all about on-the-go gaming, with a fully immersive Xbox experience on a handheld for the first time and a packed gaming library with access to installed games from leading PC storefronts as well as Xbox’s own store.
    Running Windows 11, you’ll be able to play all the best Xbox exclusives like Call of Duty, Fable and Halo, as well as third-party hit blockbusters. It’ll include a dedicated Xbox button for chat, apps and settings through an enhanced Game Bar overlay. And the machine will have the latest AMD Ryzen chipset technology so that it’s super powerful, with a 7in 1080p display and up to a terabyte of storage under the hood for plenty of download space.
    There will be two devices, though: the Ally as well as the Ally X. The difference is chip quality and storage, so the Ally will likely be cheaper. There are no prices yet, but both are expected to hit stores by Christmas.
    A spokesman said: “Everything at Xbox starts with the player. That’s why we’ve dedicated years to reimagining how to make it easier to enjoy the games you love—wherever you are—through Xbox Play Anywhere, Game Pass, Xbox Cloud Gaming (Beta), Remote Play, and more. Whether you’re at home or on the go, your favorite games should follow you.
    “ASUS shares that same commitment. Known for pushing the boundaries of handheld gaming, ASUS is similarly driven by innovation that delivers high-performance experiences that put players first.
    “Together, we’ve combined our strengths and technical expertise to introduce something entirely new.
    “These handhelds are built to make it easier than ever to access your favourite games—from Xbox, Battle.net, and other leading PC storefronts—all from a single device.”
    It comes after the Nintendo Switch 2 sold more than 3.5 million units worldwide in its first week, becoming the fastest-selling Nintendo hardware ever.
    Luciano Pereña, CEO and President of Nintendo of Europe, said: “Nintendo Switch 2 represents the next evolution of Nintendo Switch, and we’re very happy and grateful to see it already being embraced by so many players across Europe.
    “We look forward to seeing players connecting through games like Mario Kart World, sharing the experience with friends and family whether near or far.”
  • Nike Introduces the Air Max 1000, Its First Fully 3D Printed Sneaker

    Global sportswear leader Nike is reportedly preparing to release the Air Max 1000 Oatmeal, its first fully 3D printed sneaker, with a launch tentatively scheduled for Summer 2025. While Nike has yet to confirm an official release date, industry sources suggest the debut may occur sometime between June and August. The retail price is expected to be approximately $210. This model marks a step in Nike’s exploration of additive manufacturing (AM), enabled through a collaboration with Zellerfeld, a German startup known for its work in fully 3D printed footwear.
    Building Buzz Online
    The “Oatmeal” colorway—a neutral blend of soft beige tones—has already attracted attention on social platforms like TikTok, Instagram, and X. In April, content creator Janelle C. Shuttlesworth described the shoes as “light as air” in a video preview. Sneaker-focused accounts such as JustFreshKicks and TikTok user @shoehefner5 have also offered early walkthroughs. Among fans, the nickname “Foamy Oat” has started to catch on.
    Nike’s 3D printed Air Max 1000 Oatmeal. Photo via Janelle C. Shuttlesworth.
    Before generating buzz online, the sneaker made a public appearance at ComplexCon Las Vegas in November 2024. There, its laceless, sculptural silhouette and smooth, seamless texture stood out—merging futuristic design with signature Air Max elements, such as the visible heel air unit.
    Reimagining the Air Max Legacy
    Drawing inspiration from the original Air Max 1 (1987), the Air Max 1000 retains the iconic air cushion in the heel while reinventing the rest of the structure using 3D printing. The shoe’s upper and outsole are formed as a single, continuous piece, produced from ZellerFoam, a proprietary flexible material developed by Zellerfeld.
    Zellerfeld’s fused filament fabrication (FFF) process enables varied material densities throughout the shoe—resulting in a firm, supportive sole paired with a lightweight, breathable upper. The laceless, slip-on design prioritizes ease of wear while reinforcing a sleek, minimalist aesthetic.
    Nike’s Chief Innovation Officer, John Hoke, emphasized the broader impact of the design, noting that the Air Max 1000 “opens up new creative possibilities” and achieves levels of precision and contouring not possible with traditional footwear manufacturing. He also pointed to the sustainability benefits of AM, which produces minimal waste by fabricating only the necessary components.
    Expansion of 3D Printed Footwear Technology
    The Air Max 1000 joins a growing lineup of 3D printed footwear innovations from major brands. Gucci, the Italian luxury brand known for blending traditional craftsmanship with modern techniques, unveiled several Cub3d sneakers as part of its Spring Summer 2025 (SS25) collection. The brand developed Demetra, a material made from at least 70% plant-based ingredients, including viscose, wood pulp, and bio-based polyurethane. The bi-material sole combines an EVA-filled interior for cushioning and a TPU exterior, featuring an Interlocking G pattern that creates a 3D effect.
    Elsewhere, Syntilay, a footwear company combining artificial intelligence with 3D printing, launched a range of custom-fit slides. These slides are designed using AI-generated 3D models, starting with sketch-based concepts that are refined through AI platforms and then transformed into digital 3D designs. The company offers sizing adjustments based on smartphone foot scans, which are integrated into the manufacturing process.
    Join our Additive Manufacturing Advantage (AMAA) event on July 10th, where AM leaders from Aerospace, Space, and Defense come together to share mission-critical insights. Online and free to attend. Secure your spot now.
    Who won the 2024 3D Printing Industry Awards?
    Subscribe to the 3D Printing Industry newsletter to keep up with the latest 3D printing news.
    You can also follow us on LinkedIn, and subscribe to the 3D Printing Industry Youtube channel to access more exclusive content.
    Featured image shows Nike’s 3D printed Air Max 1000 Oatmeal. Photo via Janelle C. Shuttlesworth.

    Paloma Duran
    Paloma Duran holds a BA in International Relations and an MA in Journalism. Specializing in writing, podcasting, and content and event creation, she works across politics, energy, mining, and technology. With a passion for global trends, Paloma is particularly interested in the impact of technology like 3D printing on shaping our future.