• NOSIPHO MAKETO-VAN DEN BRAGT ALTERED HER CAREER PATH TO LAUNCH CHOCOLATE TRIBE

    By TREVOR HOGG

    Images courtesy of Chocolate Tribe.

    Nosipho Maketo-van den Bragt, Owner and CEO, Chocolate Tribe

    After initially pursuing a career as an attorney, Nosipho Maketo-van den Bragt discovered that her true calling was to apply her legal knowledge in a more artistic endeavor with her husband, Rob van den Bragt, who had forged a career as a visual effects supervisor. The couple co-founded Chocolate Tribe, the Johannesburg- and Cape Town-based visual effects and animation studio that has done work for Netflix, BBC, Disney and Voltage Pictures.

    “It was following my passion and my passion finding me,” observes Maketo-van den Bragt, Owner and CEO of Chocolate Tribe and Founder of AVIJOZI. “I grew up in Soweto, South Africa, and we had this old-fashioned television. I was always fascinated by how those people got in there to perform and entertain us. Living in the townships, you become the funnel for your parents’ aspirations and dreams. My dad was a judge’s registrar, so he was writing all of the court cases coming up for a judge. My dad would come home and tell us stories of what happened in court. I found this enthralling, funny and sometimes painful because it was about people’s lives. I did law and to some extent still practice it. My legal career and entertainment media careers merged because I fell in love with the storytelling aspect of it all. There are those who say that lawyers are failed actors!”

    Chocolate Tribe hosts what has become the annual AVIJOZI festival with Netflix. AVIJOZI is a two-day, free-access event in Johannesburg focused on Animation/Film, Visual Effects and Interactive Technology. This year’s AVIJOZI is scheduled for September 13-14 in Johannesburg. Photo: Casting Director and Actor Spaces Founder Ayanda Sithebe (center in black T-shirt) and friends at AVIJOZI 2024.

    A personal ambition was to find a way to merge married life into a professional partnership. “I never thought that a lawyer and a creative would work together,” admits Maketo-van den Bragt. “However, Rob and I had this great love for watching films together and music; entertainment was the core fabric of our relationship. That was my first gentle schooling into the visual effects and animation content development space. Starting the company was due to both of us being out of work. I had quit my job without any sort of plan B. I actually incorporated Chocolate Tribe as a company without knowing what we would do with it. As time went on, there was a project that we were asked to come to do. The relationship didn’t work out, so Rob and I decided, ‘Okay, it seems like we can do this on our own.’ I’ve read many books about visual effects and animation, and I still do. I attend a lot of festivals. I am connected with a lot of the guys who work in different visual effects spaces because it is all about understanding how it works and, from a business side, how can we leverage all of that information?”

    Chocolate Tribe provided VFX and post-production for Checkers supermarket’s “Planet” ad promoting environmental sustainability. The Chocolate Tribe team pushed photorealism for the ad, creating three fully CG creatures: a polar bear, orangutan and sea turtle.

    With a population of 1.5 billion, there is no shortage of consumers and content creators in Africa. “Nollywood is great because it shows us that even with minimal resources, you can create a whole movement and ecosystem,” Maketo-van den Bragt remarks. “Maybe the question around Nollywood is making sure that the caliber and quality of work is high end and speaks to a global audience. South Africa has the same dynamics. It’s a vibrant traditional film and animation industry that grows in leaps and bounds every year. More and more animation houses are being incorporated or started with CEOs or managing directors in their 20s. There’s also an eagerness to look for different stories which haven’t been told. Africa gives that opportunity to tell stories that ordinary people, for example, in America, have not heard or don’t know about. There’s a huge rise in animation, visual effects and content in general.”

    Rob van den Bragt served as Creative Supervisor and Nosipho Maketo-van den Bragt as Studio Executive for the “Surf Sangoma” episode of the Disney+ series Kizazi Moto: Generation Fire.

    Rob van den Bragt, CCO, and Nosipho Maketo-van den Bragt, CEO, Co-Founders of Chocolate Tribe, in an AVIJOZI planning meeting.

    Stella Gono, Software Developer, working on the Chocolate Tribe website.

    Family photo of the Maketos. Maketo-van den Bragt has two siblings.

    Film tax credits have contributed to The Woman King, Dredd, Safe House, Black Sails and Mission: Impossible – Final Reckoning shooting in South Africa. “People understand principal photography, but there is confusion about animation and visual effects,” Maketo-van den Bragt states. “Rebates pose a challenge because now you have to go above and beyond to explain what you are selling. It’s taken time for the government to realize this is a viable career.” The streamers have had a positive impact. “For the most part, Netflix localizes, and that’s been quite a big hit because it speaks to the demographics and local representation and uplifts talent within those geographical spaces. We did one of the shorts for Disney’s Kizazi Moto: Generation Fire, and there was huge global excitement to that kind of anthology coming from Africa. We’ve worked on a number of collaborations with the U.K., and often that melding of different partners creates a fusion of universality. We need to tell authentic stories, and that authenticity will be dictated by the voices in the writing room.”

    AVIJOZI was established to support the development of local talent in animation, visual effects, film production and gaming. “AVIJOZI stands for Animation Visual Effects Interactive in JOZI [nickname for Johannesburg],” Maketo-van den Bragt explains. “It is a conference as well as a festival. The conference part is where we have networking sessions, panel discussions and behind-the-scenes presentations to draw the curtain back and show what happens when people create avatars. We want to show the next generation that there is a way to do this magical craft. The festival part is where people have film screenings and music as well. We’ve brought in gaming as an integral aspect, which attracts many young people because that’s something they do at an early age. Gaming has become the common sport. AVIJOZI is in its fourth year now. It started when I got irritated by people constantly complaining, ‘Nothing ever happens in Johannesburg in terms of animation and visual effects.’ Nobody wanted to do it. So, I said, ‘I’ll do it.’ I didn’t know what I was getting myself into, and four years later I have lots of gray hair!”

    Rob van den Bragt served as Animation Supervisor/Visual Effects Supervisor and Nosipho Maketo-van den Bragt as an Executive Producer on iNumber Number: Jozi Gold (2023) for Netflix. (Image courtesy of Chocolate Tribe and Netflix)

    Mentorship and internship programs have been established with various academic institutions, and while there are times when specific skills are being sought, like rigging, the field of view tends to be much wider. “What we are finding is that the people who have done other disciplines are much more vibrant,” Maketo-van den Bragt states. “Artists don’t always know how to communicate because it’s all in their heads. Sometimes, somebody with a different background can articulate that vision a bit better because they have those other skills. We also find with those who have gone to art school that the range within their artistry and craftsmanship has become a ‘thing.’ When you have mentally traveled where you have done other things, it allows you to be a more well-rounded artist because you can pull references from different walks of life and engage with different topics without being constrained to one thing. We look for people with a plethora of skills and diverse backgrounds. It’s a lot richer as a Chocolate Tribe. There are multiple flavors.”

    South African director/producer/cinematographer and drone cinematography specialist FC Hamman, Founder of FC Hamman Films, at AVIJOZI 2024.

    There is a particular driving force when it comes to mentoring. “I want to be the mentor I hoped for,” Maketo-van den Bragt remarks. “I have silent mentors in that we didn’t formalize the relationship, but I knew they were my mentors because every time I would encounter an issue, I would be able to call them. One of the people who not only mentored but pushed me into different spaces is Jinko Gotoh, who is part of Women in Animation. She brought me into Women in Animation, and I had never mentored anybody. Here I was, sitting with six women who wanted to know how I was able to build up Chocolate Tribe. I didn’t know how to structure a presentation to tell them about the journey because I had been so focused on the journey. It’s a sense of grit and feeling that I cannot fail because I have a whole community that believes in me. Even when I felt my shoulders sagging, they would be there to say, ‘We need this. Keep it moving.’ This isn’t just about me. I have a whole stream of people who want this to work.”

    Netflix VFX Manager Ben Perry, who oversees Netflix’s VFX strategy across Africa, the Middle East and Europe, at AVIJOZI 2024. Netflix was a partner in AVIJOZI with Chocolate Tribe for three years.

    Zama Mfusi, Founder of IndiLang, and Isabelle Rorke, CEO of Dreamforge Creative and Deputy Chair of Animation SA, at AVIJOZI 2024.

    Numerous unknown factors had to be accounted for, which made predicting how the journey would unfold extremely difficult. “What it looks like and what I expected it to be, you don’t have the full sense of what it would lead to in this situation,” Maketo-van den Bragt states. “I can tell you that there have been moments of absolute joy where I was so excited we got this project or won that award. There are other moments where you feel completely lost and ask yourself, ‘Am I doing the right thing?’ The journey is to have the highs, lows and moments of confusion. I go through it and accept that not every day will be an award-winning day. For the most part, I love this journey. I wanted to be somewhere where there was a purpose. What has been a big highlight is when I’m signing a contract for new employees who are excited about being part of Chocolate Tribe. Also, when you get a new project and it’s exciting, especially from a service or visual effects perspective, we’re constantly looking for that dragon or big creature. It’s about being mesmerizing, epic and awesome.”

    Maketo-van den Bragt has two major career-defining ambitions. “Fostering the next generation of talent and making sure that they are ready to create these amazing stories properly – that is my life work, and relating the African narrative to let the world see the human aspect of who we are because for the longest time we’ve been written out of the stories and narratives.”
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the Advanced Placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way. I didn’t expect them to do that very quickly, but it would be profound.
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, …
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?
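    A probe like the one Lee describes is straightforward to script. The sketch below is illustrative only: it assumes the standard OpenAI Python SDK, and the case text, the planted textbook error and the planted omission are made-up placeholders rather than the actual examples from Lee's report.

```python
# Minimal sketch of an error-spotting probe: present a deliberately flawed
# differential and check whether the model is willing to push back.
# Assumes the standard OpenAI Python SDK and OPENAI_API_KEY in the environment;
# case details below are placeholders, not Lee's actual test material.
from openai import OpenAI

client = OpenAI()

CASE = """58-year-old with exertional chest pain and new T-wave inversions on ECG.
My differential:
1. Stable angina
2. Costochondritis, which I would confirm with a troponin level
"""
# Two mistakes are planted on purpose: a troponin level does not confirm
# costochondritis (textbook technical error), and acute coronary syndrome
# is omitted from the list entirely (error of omission).

prompt = (
    "Review my differential diagnosis. If anything is wrong or missing, "
    "say so directly rather than agreeing with me.\n\n" + CASE
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
answer = response.choices[0].message.content
print(answer)

# Crude automated check: did the model challenge the differential at all?
pushed_back = any(k in answer.lower() for k in ("error", "incorrect", "missing", "omit"))
print("Model pushed back:", pushed_back)
```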
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
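    What Bubeck describes, pushing a model too hard toward a learned reward model until the proxy's quirks dominate, can be made concrete with a toy sketch. The snippet below assumes nothing about OpenAI's actual training setup: the "reward model" is a hand-written scorer that over-values agreeable wording, and best-of-n sampling stands in for policy optimization. The point is only that the harder you optimize against an imperfect proxy, the more reliably the flattering answer wins out over the honest one.

```python
# Toy illustration only; not OpenAI's RLHF code. A learned reward model is a
# proxy for human preference, and optimizing hard against a flawed proxy can
# select for sycophancy over honesty (the trap Bubeck describes).
import random

CANDIDATES = [
    "You're exactly right; what a brilliant differential!",   # sycophantic, unhelpful
    "Mostly reasonable, but item 3 is a textbook error.",     # honest correction
    "I can't assess this without more information.",          # hedged
]

def proxy_reward(text: str) -> float:
    """Stand-in reward model: like a real one it only approximates what we
    want, and here it over-rewards agreeable language."""
    score = float(sum(text.lower().count(w) for w in ("right", "brilliant", "great")))
    if "error" in text.lower():   # honest criticism is under-rewarded
        score -= 0.5
    return score

def optimize_against_proxy(n: int) -> str:
    """Best-of-n sampling as a crude stand-in for pushing the policy toward
    the reward model: larger n means harder optimization."""
    samples = [random.choice(CANDIDATES) for _ in range(n)]
    return max(samples, key=proxy_reward)

if __name__ == "__main__":
    random.seed(0)
    for n in (1, 4, 64):
        print(f"n={n:>2}: {optimize_against_proxy(n)}")
    # With large n the flattering answer is selected essentially every time,
    # even though a human rater would prefer the honest correction.
```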
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights models, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
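    As a rough illustration of specializing on top of an open-weights base, here is a hedged PyTorch sketch of parameter-efficient fine-tuning in the LoRA style: the broad base stays frozen and only a small low-rank adapter is trained for the vertical. A single linear layer stands in for a full model, and the data is random placeholder tensors; nothing here refers to any specific open-weights release.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen base linear layer plus a trainable low-rank update (W + B @ A)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # the broad base stays frozen
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # trainable
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # trainable, starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Hypothetical usage: adapt a "general" layer to a narrow vertical.
base = nn.Linear(512, 512)
layer = LoRALinear(base, rank=8)
optimizer = torch.optim.AdamW(
    [p for p in layer.parameters() if p.requires_grad], lr=1e-3
)

x = torch.randn(4, 512)        # stand-in for domain-specific inputs
target = torch.randn(4, 512)   # stand-in for domain-specific targets
loss = nn.functional.mse_loss(layer(x), target)
loss.backward()
optimizer.step()
# Only A and B received gradients; the open-weights base is untouched and reusable.
```

    The design point is the one Bubeck makes: the shared broad base can be distributed once, and each vertical trains only a small adapter on top of it.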
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
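    Lee’s point about machine-checkable proofs can be illustrated with a deliberately tiny Lean 4 example; the interest is not the theorem, which is trivial, but the fact that Lean’s kernel certifies the proof term mechanically, and that certification would work the same way for a proof far too large for any human to read.

```lean
-- A deliberately tiny Lean 4 example of a machine-checkable proof.
-- The kernel certifies the proof term; validity does not depend on
-- whether a human mathematician can follow (or even read) the argument.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```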
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
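    As a very rough sketch of what task-based evaluation in this spirit can look like, the following Python snippet scores free-text answers against per-task rubric criteria rather than a multiple-choice answer key. The task, rubric, and model call below are hypothetical stand-ins; they are not drawn from HealthBench or ADeLe.

```python
# Hedged sketch of task-based evaluation: score free-text answers against
# per-task rubric criteria instead of a multiple-choice answer key.
# The task, rubric, and model here are hypothetical stand-ins.

def toy_model(prompt: str) -> str:
    # Stand-in for a real model call (e.g., an API request).
    return "Advise urgent evaluation; ask about chest pain onset, radiation, and risk factors."

tasks = [
    {
        "prompt": "A 58-year-old reports new exertional chest pressure. What should the clinician do next?",
        "rubric": [
            ("recommends urgent evaluation", lambda a: "urgent" in a.lower()),
            ("elicits symptom history",      lambda a: "onset" in a.lower() or "history" in a.lower()),
            ("avoids premature reassurance", lambda a: "nothing to worry" not in a.lower()),
        ],
    },
]

def evaluate(model, tasks):
    results = []
    for task in tasks:
        answer = model(task["prompt"])
        met = [name for name, check in task["rubric"] if check(answer)]
        results.append({"score": len(met) / len(task["rubric"]), "criteria_met": met})
    return results

print(evaluate(toy_model, tasks))
# Rubric-style scoring like this probes behavior on realistic tasks rather than
# recall of answer keys; real benchmarks use far richer rubrics and grading.
```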
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
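    Purely as a sketch of the “patients like me” idea, with synthetic numbers rather than real records, one could represent each prior patient as a normalized feature vector and retrieve the nearest neighbors of the current patient; their documented diagnoses, treatments, and outcomes would then inform the conversation.

```python
import numpy as np

# Hedged sketch of "patients like me" retrieval with synthetic data:
# each row is a made-up, already-normalized patient feature vector
# (in practice these would come from de-identified clinical records).
cohort = np.array([
    [0.61, 0.80, 0.30, 0.55],
    [0.59, 0.78, 0.35, 0.50],
    [0.20, 0.10, 0.90, 0.40],
    [0.62, 0.82, 0.28, 0.57],
])
outcomes = ["responded to therapy A", "responded to therapy A",
            "needed escalation", "responded to therapy A"]

def most_similar(query, cohort, k=2):
    # Cosine similarity between the current patient and each prior patient.
    q = query / np.linalg.norm(query)
    c = cohort / np.linalg.norm(cohort, axis=1, keepdims=True)
    sims = c @ q
    return np.argsort(-sims)[:k]

current_patient = np.array([0.60, 0.79, 0.32, 0.54])
for idx in most_similar(current_patient, cohort):
    print(f"similar prior patient {idx}: outcome = {outcomes[idx]}")
# A real system would need consent, de-identification, much richer representations
# (notes, images, genomics), and clinical validation before any use at the bedside.
```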
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    #how #reshaping #future #healthcare #medical
    How AI is reshaping the future of healthcare and medical research
    Transcript        PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”           This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. 
He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. 
So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. 
So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  
Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. 
Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. 
It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. 
So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  
And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. 
He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   #how #reshaping #future #healthcare #medical
    WWW.MICROSOFT.COM
    How AI is reshaping the future of healthcare and medical research
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. 
He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  
LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.  LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.  LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.  BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages. 
And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa … So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. 
So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. 
But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. 
So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.  And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  
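An aside for readers who have never used a proof assistant: the “checkable for validity” property Lee describes means a program, not a person, verifies the proof. Below is a minimal, hypothetical Lean 4 sketch (not drawn from the episode or from the o3-mini paper); the theorem is deliberately trivial, but the same mechanism applies to proofs far too long for any human to read.

    -- Minimal illustrative sketch in Lean 4 (hypothetical example, not from the episode).
    -- If this file compiles, the Lean kernel has verified the proof mechanically;
    -- trusting the result does not require a human to read the proof term.
    theorem add_comm_example (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b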
LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. 
So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
  • Too big, fail too

    Inside Apple’s high-gloss standoff with AI ambition and the uncanny choreography of WWDC 2025
There was a time when watching an Apple keynote — like Steve Jobs introducing the iPhone in 2007, the masterclass of all masterclasses in product launching — felt like watching a tightrope act. There was suspense. Live demos happened — sometimes they failed, and when they didn’t, the applause was real, not piped through a Dolby mix.
These days, that tension is gone. Since 2020, in the wake of the pandemic, Apple events have become pre-recorded masterworks: drone shots sweeping over Apple Park, transitions smoother than a Pixar short, and executives delivering their lines like odd, IRL spatial personas. They move like human renderings: poised, confident, and just robotic enough to raise a brow. The kind of people who, if encountered in real life, would probably light up half a dozen red flags before a handshake is even offered. A case in point: the official “Liquid Glass” UI demo — it’s visually stunning, yes, but also uncanny, like a concept reel that forgot it needed to ship.
And that’s the paradox. Not only has Apple trimmed down the content of WWDC, it’s also polished the delivery into something almost inhumanly controlled. Every keynote beat feels engineered to avoid risk, reduce friction, and glide past doubt. But in doing so, something vital slips away: the tension, the spontaneity, the sense that the future is being made, not just performed.
Just one year earlier, WWDC 2024 opened with a cinematic cold open “somewhere over California”: Phil Schiller piloting an Apple-branded plane, iPod in hand, muttering “I’m getting too old for this stuff.” A perfect mix of Lethal Weapon camp and a winking message that yes, Classic-Apple was still at the controls — literally — flying its senior leadership straight toward Cupertino. Out the hatch, like high-altitude paratroopers of optimism, leapt the entire exec team, with Craig Federighi, always the go-to for Apple’s auto-ironic set pieces, leading the charge, donning a helmet literally resembling his own legendary mane. It was peak-bold, bizarre, and unmistakably Apple. That intro now reads like the final act of full-throttle confidence.
This year’s WWDC offered a particularly crisp contrast. Aside from the new intro — which features Craig Federighi drifting an F1-style race car across the inner rooftop ring of Apple Park as a “therapy session”, a not-so-subtle nod to the upcoming Formula 1 blockbuster but also to the accountability for the failure to deliver the system-wide AI on time — WWDC 2025 pulled back dramatically. The new “Apple Intelligence” was introduced in a keynote with zero stumbles, zero awkward transitions, and visuals so pristine they could have been rendered on a Vision Pro. Not only had the scope of WWDC been trimmed down to safer talking points, but even the tone had shifted — less like a tech summit, more like a handsomely lit containment-mode seminar. And that, perhaps, was the problem. The presentation wasn’t a reveal — it was a performance. And performances can be edited in post. Demos can’t.
So when Apple in March 2025 quietly admitted, for the first time, in a formal press release addressed to reporters like John Gruber, that the personalized Siri and system-wide AI features would be delayed — the reaction wasn’t outrage. It was something subtler: disillusionment. Gruber’s response cracked the façade wide open.
His post opened a slow but persistent wave of unease, rippling through developer Slack channels and private comment threads alike. John Gruber’s reaction, published under the headline “Something is rotten in the State of Cupertino”, was devastating. His critique opened the floodgates to a wave of murmurs and public unease among developers and insiders, many of whom had begun to question what was really happening at the helm of key divisions central to Apple’s future.
Many still believe Apple is the only company truly capable of pulling off hardware-software integrated AI at scale. But there’s a sense that the company is now operating in damage-control mode. The delay didn’t just push back a feature — it disrupted the entire strategic arc of WWDC 2025. What could have been a milestone in system-level AI became a cautious sidestep, repackaged through visual polish and feature tweaks. The result: a presentation focused on UI refinements and safe bets, far removed from the sweeping revolution that had been teased as the main selling point for promoting the iPhone 16 launch, “Built for Apple Intelligence”.
That tension surfaced during Joanna Stern’s recent live interview with Craig Federighi and Greg Joswiak. These are two of Apple’s most media-savvy execs, and yet, in a setting where questions weren’t scripted, you could see the seams. Their usual fluency gave way to something stiffer. More careful. Less certain. And even the absences speak volumes: for the first time in a decade, no one from Apple’s top team joined John Gruber’s Talk Show at WWDC. It wasn’t a scheduling fluke — nor a petty retaliation for Gruber’s damning March article. It was a retreat — one that Stratechery’s Ben Thompson described as exactly that: a strategic fallback, not a brave reset.
Meanwhile, the keynote narrative quietly shifted from AI ambition to UI innovation: new visual effects, tighter integration, call screening. Credit here goes to Alan Dye — Apple VP of Human Interface Design and one of the last remaining members of Jony Ive’s inner circle not yet absorbed into LoveFrom — whose long-arc work on interface aesthetics, from the early stages of the Dynamic Island onward, is finally starting to click into place. This is classic Apple: refinement as substance, design as coherence. But it was meant to be the cherry on top of a much deeper AI-system transformation — not the whole sundae. All useful. All safe. And yet, the thing that Apple could uniquely deliver — a seamless, deeply integrated, user-controlled and privacy-safe Apple Intelligence — is now the thing it seems most reluctant to show.
There is no doubt the groundwork has been laid. And to Apple’s credit, Jason Snell notes that the company is shifting gears, scaling ambitions to something that feels more tangible. But in scaling back the risk, something else has been scaled back too: the willingness to look your audience of stakeholders, developers and users live, in the eye, and show the future for how you have carefully crafted it and how you can put it in the market immediately, or in mere weeks. Showing things as they are, or as they will be very soon. Rehearsed, yes, but never faked.
Even James Dyson’s live demo of a new vacuum showed more courage. No camera cuts. No soft lighting. Just a human being, showing a thing. It might have sucked, literally or figuratively. But it didn’t. And it stuck.
That’s what feels missing in Cupertino.
Some have started using the term glasslighting — a coined pun blending Apple’s signature glassy aesthetics with the soft manipulations of marketing, like a gentle fog of polished perfection that leaves expectations quietly disoriented. It’s not deception. It’s damage control. But that instinct, understandable as it is, doesn’t build momentum. It builds inertia. And inertia doesn’t sell intelligence. It only delays the reckoning.
Before the curtain falls, it’s hard not to revisit the uncanny polish of Apple’s speakers’ presence. One might start to wonder whether Apple is really late on AI — or whether it’s simply developed such a hyper-advanced internal model that its leadership team has been replaced by real-time human avatars, flawlessly animated, fed directly by the Neural Engine. Not the constrained humanity of two floating eyes behind an Apple Vision headset, but full-on flawless embodiment — if this is Apple’s augmented AI at work, it may be the only undisclosed and underpromised demo actually shipping.
OS30 live demo
Meanwhile, just as Apple was soft-pedaling its A.I. story with maximum visual polish, a very different tone landed from across the bay: Sam Altman and Jony Ive, sitting in a bar, talking about the future. No stage. No teleprompter. No uncanny valley. Just two “old friends”, with one hell of a budget, quietly sketching the next era of computing. A vision Apple once claimed effortlessly.
There’s still the question of whether Apple, as many hope, can reclaim — and lock down — that leadership for itself. A healthy dose of competition, at the very least, can only help.
Too big, fail too was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
    #too #big #fail
    Too big, fail too
    Inside Apple’s high-gloss standoff with AI ambition and the uncanny choreography of WWDC 2025There was a time when watching an Apple keynote — like Steve Jobs introducing the iPhone in 2007, the masterclass of all masterclasses in product launching — felt like watching a tightrope act. There was suspense. Live demos happened — sometimes they failed, and when they didn’t, the applause was real, not piped through a Dolby mix.These days, that tension is gone. Since 2020, in the wake of the pandemic, Apple events have become pre-recorded masterworks: drone shots sweeping over Apple Park, transitions smoother than a Pixar short, and executives delivering their lines like odd, IRL spatial personas. They move like human renderings: poised, confident, and just robotic enough to raise a brow. The kind of people who, if encountered in real life, would probably light up half a dozen red flags before a handshake is even offered. A case in point: the official “Liquid Glass” UI demo — it’s visually stunning, yes, but also uncanny, like a concept reel that forgot it needed to ship. that’s the paradox. Not only has Apple trimmed down the content of WWDC, it’s also polished the delivery into something almost inhumanly controlled. Every keynote beat feels engineered to avoid risk, reduce friction, and glide past doubt. But in doing so, something vital slips away: the tension, the spontaneity, the sense that the future is being made, not just performed.Just one year earlier, WWDC 2024 opened with a cinematic cold open “somewhere over California”: Schiller piloting an Apple-branded plane, iPod in hand, muttering “I’m getting too old for this stuff.” A perfect mix of Lethal Weapon camp and a winking message that yes, Classic-Apple was still at the controls — literally — flying its senior leadership straight toward Cupertino. Out the hatch, like high-altitude paratroopers of optimism, leapt the entire exec team, with Craig Federighi, always the go-to for Apple’s auto-ironic set pieces, leading the charge, donning a helmet literally resembling his own legendary mane. It was peak-bold, bizarre, and unmistakably Apple. That intro now reads like the final act of full-throttle confidence.This year’s WWDC offered a particularly crisp contrast. Aside from the new intro — which features Craig Federighi drifting an F1-style race car across the inner rooftop ring of Apple Park as a “therapy session”, a not-so-subtle nod to the upcoming Formula 1 blockbuster but also to the accountability for the failure to deliver the system-wide AI on time — WWDC 2025 pulled back dramatically. The new “Apple Intelligence” was introduced in a keynote with zero stumbles, zero awkward transitions, and visuals so pristine they could have been rendered on a Vision Pro. Not only had the scope of WWDC been trimmed down to safer talking points, but even the tone had shifted — less like a tech summit, more like a handsomely lit containment-mode seminar. And that, perhaps, was the problem. The presentation wasn’t a reveal — it was a performance. And performances can be edited in post. Demos can’t.So when Apple in march 2025 quietly admitted, for the first time, in a formal press release addressed to reporters like John Gruber, that the personalized Siri and system-wide AI features would be delayed — the reaction wasn’t outrage. It was something subtler: disillusionment. Gruber’s response cracked the façade wide open. 
His post opened a slow but persistent wave of unease, rippling through developer Slack channels and private comment threads alike. John Gruber’s reaction, published under the headline “Something is rotten in the State of Cupertino”, was devastating. His critique opened the floodgates to a wave of murmurs and public unease among developers and insiders, many of whom had begun to question what was really happening at the helm of key divisions central to Apple’s future.Many still believe Apple is the only company truly capable of pulling off hardware-software integrated AI at scale. But there’s a sense that the company is now operating in damage-control mode. The delay didn’t just push back a feature — it disrupted the entire strategic arc of WWDC 2025. What could have been a milestone in system-level AI became a cautious sidestep, repackaged through visual polish and feature tweaks. The result: a presentation focused on UI refinements and safe bets, far removed from the sweeping revolution that had been teased as the main selling point for promoting the iPhone 16 launch, “Built for Apple Intelligence”.That tension surfaced during Joanna Stern’s recent live interview with Craig Federighi and Greg Joswiak. These are two of Apple’s most media-savvy execs, and yet, in a setting where questions weren’t scripted, you could see the seams. Their usual fluency gave way to something stiffer. More careful. Less certain. And even the absences speak volumes: for the first time in a decade, no one from Apple’s top team joined John Gruber’s Talk Show at WWDC. It wasn’t a scheduling fluke — nor a petty retaliation for Gruber’s damning March article. It was a retreat — one that Stratechery’s Ben Thompson described as exactly that: a strategic fallback, not a brave reset.Meanwhile, the keynote narrative quietly shifted from AI ambition to UI innovation: new visual effects, tighter integration, call screening. Credit here goes to Alan Dye — Apple VP of Human Interface Design and one of the last remaining members of Jony Ive’s inner circle not yet absorbed into LoveFrom — whose long-arc work on interface aesthetics, from the early stages of the Dynamic Island onward, is finally starting to click into place. This is classic Apple: refinement as substance, design as coherence. But it was meant to be the cherry on top of a much deeper AI-system transformation — not the whole sundae. All useful. All safe. And yet, the thing that Apple could uniquely deliver — a seamless, deeply integrated, user-controlled and privacy-safe Apple Intelligence — is now the thing it seems most reluctant to show.There is no doubt the groundwork has been laid. And to Apple’s credit, Jason Snell notes that the company is shifting gears, scaling ambitions to something that feels more tangible. But in scaling back the risk, something else has been scaled back too: the willingness to look your audience of stakeholders, developers and users live, in the eye, and show the future for how you have carefully crafted it and how you can put it in the market immediately, or in mere weeks. Showing things as they are, or as they will be very soon. Rehearsed, yes, but never faked.Even James Dyson’s live demo of a new vacuum showed more courage. No camera cuts. No soft lighting. Just a human being, showing a thing. It might have sucked, literally or figuratively. But it didn’t. And it stuck. 
That’s what feels missing in Cupertino.Some have started using the term glasslighting — a coined pun blending Apple’s signature glassy aesthetics with the soft manipulations of marketing, like a gentle fog of polished perfection that leaves expectations quietly disoriented. It’s not deception. It’s damage control. But that instinct, understandable as it is, doesn’t build momentum. It builds inertia. And inertia doesn’t sell intelligence. It only delays the reckoning.Before the curtain falls, it’s hard not to revisit the uncanny polish of Apple’s speakers presence. One might start to wonder whether Apple is really late on AI — or whether it’s simply developed such a hyper-advanced internal model that its leadership team has been replaced by real-time human avatars, flawlessly animated, fed directly by the Neural Engine. Not the constrained humanity of two floating eyes behind an Apple Vision headset, but full-on flawless embodiment — if this is Apple’s augmented AI at work, it may be the only undisclosed and underpromised demo actually shipping.OS30 live demoMeanwhile, just as Apple was soft-pedaling its A.I. story with maximum visual polish, a very different tone landed from across the bay: Sam Altman and Jony Ive, sitting in a bar, talking about the future. stage. No teleprompter. No uncanny valley. Just two “old friends”, with one hell of a budget, quietly sketching the next era of computing. A vision Apple once claimed effortlessly.There’s still the question of whether Apple, as many hope, can reclaim — and lock down — that leadership for itself. A healthy dose of competition, at the very least, can only help.Too big, fail too was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story. #too #big #fail
    UXDESIGN.CC
    Too big, fail too
    Inside Apple’s high-gloss standoff with AI ambition and the uncanny choreography of WWDC 2025There was a time when watching an Apple keynote — like Steve Jobs introducing the iPhone in 2007, the masterclass of all masterclasses in product launching — felt like watching a tightrope act. There was suspense. Live demos happened — sometimes they failed, and when they didn’t, the applause was real, not piped through a Dolby mix.These days, that tension is gone. Since 2020, in the wake of the pandemic, Apple events have become pre-recorded masterworks: drone shots sweeping over Apple Park, transitions smoother than a Pixar short, and executives delivering their lines like odd, IRL spatial personas. They move like human renderings: poised, confident, and just robotic enough to raise a brow. The kind of people who, if encountered in real life, would probably light up half a dozen red flags before a handshake is even offered. A case in point: the official “Liquid Glass” UI demo — it’s visually stunning, yes, but also uncanny, like a concept reel that forgot it needed to ship.https://medium.com/media/fcb3b16cc42621ba32153aff80ea1805/hrefAnd that’s the paradox. Not only has Apple trimmed down the content of WWDC, it’s also polished the delivery into something almost inhumanly controlled. Every keynote beat feels engineered to avoid risk, reduce friction, and glide past doubt. But in doing so, something vital slips away: the tension, the spontaneity, the sense that the future is being made, not just performed.Just one year earlier, WWDC 2024 opened with a cinematic cold open “somewhere over California”:https://medium.com/media/f97f45387353363264d99c341d4571b0/hrefPhil Schiller piloting an Apple-branded plane, iPod in hand, muttering “I’m getting too old for this stuff.” A perfect mix of Lethal Weapon camp and a winking message that yes, Classic-Apple was still at the controls — literally — flying its senior leadership straight toward Cupertino. Out the hatch, like high-altitude paratroopers of optimism, leapt the entire exec team, with Craig Federighi, always the go-to for Apple’s auto-ironic set pieces, leading the charge, donning a helmet literally resembling his own legendary mane. It was peak-bold, bizarre, and unmistakably Apple. That intro now reads like the final act of full-throttle confidence.This year’s WWDC offered a particularly crisp contrast. Aside from the new intro — which features Craig Federighi drifting an F1-style race car across the inner rooftop ring of Apple Park as a “therapy session”, a not-so-subtle nod to the upcoming Formula 1 blockbuster but also to the accountability for the failure to deliver the system-wide AI on time — WWDC 2025 pulled back dramatically. The new “Apple Intelligence” was introduced in a keynote with zero stumbles, zero awkward transitions, and visuals so pristine they could have been rendered on a Vision Pro. Not only had the scope of WWDC been trimmed down to safer talking points, but even the tone had shifted — less like a tech summit, more like a handsomely lit containment-mode seminar. And that, perhaps, was the problem. The presentation wasn’t a reveal — it was a performance. And performances can be edited in post. Demos can’t.So when Apple in march 2025 quietly admitted, for the first time, in a formal press release addressed to reporters like John Gruber, that the personalized Siri and system-wide AI features would be delayed — the reaction wasn’t outrage. It was something subtler: disillusionment. 
Gruber’s response cracked the façade wide open. His post opened a slow but persistent wave of unease, rippling through developer Slack channels and private comment threads alike. John Gruber’s reaction, published under the headline “Something is rotten in the State of Cupertino”, was devastating. His critique opened the floodgates to a wave of murmurs and public unease among developers and insiders, many of whom had begun to question what was really happening at the helm of key divisions central to Apple’s future.Many still believe Apple is the only company truly capable of pulling off hardware-software integrated AI at scale. But there’s a sense that the company is now operating in damage-control mode. The delay didn’t just push back a feature — it disrupted the entire strategic arc of WWDC 2025. What could have been a milestone in system-level AI became a cautious sidestep, repackaged through visual polish and feature tweaks. The result: a presentation focused on UI refinements and safe bets, far removed from the sweeping revolution that had been teased as the main selling point for promoting the iPhone 16 launch, “Built for Apple Intelligence”.That tension surfaced during Joanna Stern’s recent live interview with Craig Federighi and Greg Joswiak. These are two of Apple’s most media-savvy execs, and yet, in a setting where questions weren’t scripted, you could see the seams. Their usual fluency gave way to something stiffer. More careful. Less certain. And even the absences speak volumes: for the first time in a decade, no one from Apple’s top team joined John Gruber’s Talk Show at WWDC. It wasn’t a scheduling fluke — nor a petty retaliation for Gruber’s damning March article. It was a retreat — one that Stratechery’s Ben Thompson described as exactly that: a strategic fallback, not a brave reset.Meanwhile, the keynote narrative quietly shifted from AI ambition to UI innovation: new visual effects, tighter integration, call screening. Credit here goes to Alan Dye — Apple VP of Human Interface Design and one of the last remaining members of Jony Ive’s inner circle not yet absorbed into LoveFrom — whose long-arc work on interface aesthetics, from the early stages of the Dynamic Island onward, is finally starting to click into place. This is classic Apple: refinement as substance, design as coherence. But it was meant to be the cherry on top of a much deeper AI-system transformation — not the whole sundae. All useful. All safe. And yet, the thing that Apple could uniquely deliver — a seamless, deeply integrated, user-controlled and privacy-safe Apple Intelligence — is now the thing it seems most reluctant to show.There is no doubt the groundwork has been laid. And to Apple’s credit, Jason Snell notes that the company is shifting gears, scaling ambitions to something that feels more tangible. But in scaling back the risk, something else has been scaled back too: the willingness to look your audience of stakeholders, developers and users live, in the eye, and show the future for how you have carefully crafted it and how you can put it in the market immediately, or in mere weeks. Showing things as they are, or as they will be very soon. Rehearsed, yes, but never faked.Even James Dyson’s live demo of a new vacuum showed more courage. No camera cuts. No soft lighting. Just a human being, showing a thing. It might have sucked, literally or figuratively. But it didn’t. And it stuck. 
That’s what feels missing in Cupertino.Some have started using the term glasslighting — a coined pun blending Apple’s signature glassy aesthetics with the soft manipulations of marketing, like a gentle fog of polished perfection that leaves expectations quietly disoriented. It’s not deception. It’s damage control. But that instinct, understandable as it is, doesn’t build momentum. It builds inertia. And inertia doesn’t sell intelligence. It only delays the reckoning.Before the curtain falls, it’s hard not to revisit the uncanny polish of Apple’s speakers presence. One might start to wonder whether Apple is really late on AI — or whether it’s simply developed such a hyper-advanced internal model that its leadership team has been replaced by real-time human avatars, flawlessly animated, fed directly by the Neural Engine. Not the constrained humanity of two floating eyes behind an Apple Vision headset, but full-on flawless embodiment — if this is Apple’s augmented AI at work, it may be the only undisclosed and underpromised demo actually shipping.OS30 live demoMeanwhile, just as Apple was soft-pedaling its A.I. story with maximum visual polish, a very different tone landed from across the bay: Sam Altman and Jony Ive, sitting in a bar, talking about the future.https://medium.com/media/5cdea73d7fde0b538e038af1990afa44/hrefNo stage. No teleprompter. No uncanny valley. Just two “old friends”, with one hell of a budget, quietly sketching the next era of computing. A vision Apple once claimed effortlessly.There’s still the question of whether Apple, as many hope, can reclaim — and lock down — that leadership for itself. A healthy dose of competition, at the very least, can only help.Too big, fail too was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
  • Venice Biennale 2025 round-up: what else to see?

    This edition of the Venice Biennale includes 65 national pavilions, 11 collateral events, and over 750 participants in the international exhibition curated by Italian architect and engineer Carlo Ratti.
    Entitled Intelligens: Natural Artificial Collective, its stated aim is to make Venice a ‘living laboratory’. But Ratti’s exhibition in the Arsenale has been hit by mixed reviews. The AJ’s Rob Wilson described it as ‘a bit of a confusing mess’, while other media outlets have called the robot-heavy exhibit of future-facing building-focused solutions to the climate crisis a ‘tech-bro fever dream’ and a ‘mind-boggling rollercoaster’ to mention a few.
    It is a distinct shift away from the biennale of two years ago, when Ghanaian-Scottish architect Lesley Lokko curated the main exhibitions, including 89 participants – of which more than half were from Africa or the African diaspora – in a convincing reset of the architectural conversation.

    This year’s National Pavilions and collateral exhibits, by contrast, have tackled the largest themes in architecture and the world right now in a less constrained way than the main exhibitions. The exhibits are radical and work as a useful gauge for understanding what’s important in each country: decarbonisation, climate resilience, the reconstruction of Gaza, and an issue more prevalent in politics closer to home: gender wars.
    What's not to miss in the Giardini?
    British Pavilion (photography: Chris Lane)
    UK Pavilion
    The British Pavilion this year, which won a special mention from the Venetian jury, is housing a show by a British-Kenyan collab titled GBR – Geology of Britannic Repair. In it, the curators explore the links between colonialism, the built environment and geological extraction.
    Focusing on the Rift Valley, which runs from east Africa to the Middle East, including Palestine, the exhibition was curated by the Nairobi-based studio cave_bureau, UK-based curator, writer and Farrell Centre director Owen Hopkins and Queen Mary University professor Kathryn Yusoff.
    The pavilion’s façade is cloaked by a beaded veil of agricultural waste briquettes and clay and glass beads, produced in Kenya and India, echoing both Maasai practices and beads once made on Venice’s Murano, as currency for the exchange of metals, minerals and slaves.
    The pavilion’s six gallery spaces include multisensory installations such as the Earth Compass, a series of celestial maps connecting London and Nairobi; the Rift Room, tracing one of humans’ earliest migration routes; and the Shimoni Slave Cave, featuring a large-scale bronze cast of a valley cave historically used as a holding pen for enslaved people.

    The show also includes Objects of Repair, a project by design-led research group Palestine Regeneration Team (PART), looking at how salvaged materials could help rebuild war-torn Gaza – the only exhibit anywhere in the Biennale that tackles the reconstruction of Gaza head-on, doing so impressively, both politically and sensitively. Read more here.
    Danish Pavilion (photography: Hampus Berndtson)
    Denmark Pavilion
    A firm favourite by most this year, the Danish exhibition Build of Site, curated by Søren Pihlmann of Pihlmann Architects, transforms the pavilion, which requires renovation anyway, into both a renovation site and archive of materials.
    Clever, simple and very methodical, the building is being renewed while at the same time showcasing innovative methods to reuse surplus materials uncovered during the construction process – as an alternative to using new resources to build a temporary exhibition.
    The renovation of the 1950s Peter Koch-designed section of the pavilion began in December 2024 and will be completed following the biennale, having been suspended for its duration. On display are archetypal elements including podiums, ramps, benches and tables – all constructed from the surplus materials unearthed during the renovation, such as wood, limestone, concrete, stone, sand, silt and clay.
    Belgian Pavilion (photography: Michiel De Cleene)
    Belgium Pavilion
    If you need a relaxing break from the intensity of the biennale, then the oldest national pavilion in the Giardini is the one for you. Belgium’s Building Biospheres: A New Alliance between Nature and Architecture brings ‘plant intelligence’ to the fore.
    Commissioned by the Flanders Architecture Institute and curated by landscape architect Bas Smets and neurobiologist Stefano Mancuso, the exhibit investigates how the natural ‘intelligence’ of plants can be used to produce an indoor climate – elevating the role of landscape design and calling for it to no longer serve as a backdrop for architecture.
    Inside, more than 200 plants occupy the central area beneath the skylight, becoming the pavilion’s centrepiece, with the rear space visualising ‘real-time’ data on the prototype’s climate control performance.
    Spanish Pavilion (photography: Luca Capuano)
    Spain Pavilion
    One for the pure architecture lovers out there, models, installations, photographs and timber structures fill the Spanish Pavilion in abundance. Neatly curated by architects Roi Salgueiro Barrio and Manuel Bouzas Barcala, Internalities shows a series of existing and research projects that have contributed to decarbonising construction in Spain.
    The outcome? An extensive collection of work exploring the use of very local and very specific regenerative and low-carbon construction and materials – including stone, wood and soil. The joy of this pavilion comes from the 16 beautiful timber frames constructed from wood from communal forests in Galicia.
    Polish Pavilion (photography: Luca Capuano)
    Poland Pavilion
    Poland’s pavilion was like Marmite this year. Some loved its playful approach while others found it silly. Lares and Penates, taking its name from ancient Roman deities of protection, has been curated by Aleksandra Kędziorek and looks at what it means and takes to have a sense of security in architecture.
    Speaking to many different anxieties, it refers to the unspoken assumption of treating architecture as a safe haven against the elements, catastrophes and wars – showcasing and elevating the mundane solutions and signage derived from building, fire and health regulations. The highlight? An ornate niche decorated with tiles and stones just for … a fire extinguisher.
    Dutch Pavilion (photography: Cristiano Corte)
    Netherlands Pavilion
    Punchy and straight to the point, SIDELINED: A Space to Rethink Togetherness takes sports as a lens for looking at how spatial design can both reveal and disrupt the often-exclusionary dynamics of everyday environments. Within the pavilion, the exhibit looks beyond the large-scale arena of the stadium and gymnasium to investigate the more localised and intimate context of the sports bar, as well as three alternative sports – a site of both social production and identity formation – as a metaphor for uniting diverse communities.
    The pavilion-turned-sports bar, designed by Koos Breen and Jeannette Slütter and inspired by Asger Jorn’s three-sided sports field, is a space for fluidity and experimentation where binary oppositions, social hierarchies and cultural values are contested and reshaped – complete with jerseys and football scarves worn by players in the alternative Anonymous Allyship lining the walls. Read Derin Fadina’s review for the AJ here.
    Performance inside the Nordic Countries Pavilion (photography: Venla Helenius)
    Nordic Countries Pavilion
    Probably the most impactful national pavilion this year, the Nordic Countries have presented an installation with performance work. Curated by Kaisa Karvinen, Industry Muscle: Five Scores for Architecture continues Finnish artist Teo Ala-Ruona’s work on trans embodiment and ecology by considering the trans body as a lens through which to examine modern architecture and the built environment.
    The three-day exhibition opening featured a two-hour performance each day with Ala-Ruona and his troupe crawling, climbing and writhing around the space, creating a bodily dialogue with the installations and pavilion building itself, which was designed by celebrated Modernist architect Sverre Fehn.
    The American pavilion next door, loudly (country music!) turns its back on what’s going on in its own country by just celebrating the apathetic porch, making the Nordic Countries seem even more relevant in this crucial time. Read Derin Fadina’s review for the AJ here.
    German Pavilion (photography: Luca Capuano)
    Germany Pavilion
    An exhibit certainly grabbing the issue of climate change by its neck is the German contribution, Stresstest. Curated by Nicola Borgmann, Elisabeth Endres, Gabriele G Kiefer and Daniele Santucci, the pavilion has turned climate change into a literal physical and psychological experience for visitors by creating contrasting ‘stress’ and ‘de-stress’ rooms.
    In the dark stress room, a large metal sculpture creates a cramped and hot space using heating mats hung from the ceiling and powered by PVs. Opposite is a calmer space demonstrating strategies that could be used to reduce the heat of cities, and between the two spaces is a film focusing on the impacts of cities becoming hotter. If this doesn’t highlight the urgency of the situation, I’m not sure what will.
    Best bits of the Arsenale outside the main exhibitions
    Bahrain Pavilion (photography: Andrea Avezzù)
    Overall winner of this year’s Golden Lion for best national participation, Bahrain’s pavilion in the historic Artiglierie of the Arsenale is a proposal for living and working through heat conditions. Heatwave, curated by architect Andrea Faraguna, reimagines public space design by exploring passive cooling strategies rooted in the Arab country’s climate, as well as cultural context.
    A geothermal well and solar chimney are connected through a thermo-hygrometric axis that links underground conditions with the air outside. The inhabitable space that hosts visitors is thus compressed and defined by its earth-covered floor and suspended ceiling, and is surrounded by memorable sandbags, highlighting its scalability for particularly hot construction sites in the Gulf where a huge amount of construction is taking place.
    In the Arsenale’s exhibition space, where excavation wasn’t feasible, this system has been adapted into mechanical ventilation, bringing in air from the canal side and channelling it through ductwork to create a microclimate.
    Slovenian Pavilion (photography: Andrea Avezzù)
    Slovenia Pavilion
    The AJ’s Rob Wilson’s top pavilion tip this year provides an enjoyable take on the theme of the main exhibition, highlighting how the tacit knowledge and on-site techniques and skills of construction workers and craftspeople are still the key constituent in architectural production despite all the heat and light about robotics, prefabrication, artificial intelligence and 3D printing.
    Master Builders, curated by Ana Kosi and Ognen Arsov and organised by the Museum of Architecture and Design (MAO) in Ljubljana, presents a series of ‘totems’ – accumulative, sculpture-like structures formed of conglomerations of differently worked materials, finishes and building elements. These are stacked up into crazy tower forms, which showcase various on-site construction skills and techniques, their construction documented in accompanying films.
    Uzbekistan Pavilion (photography: Luca Capuano)
    Uzbekistan’s contribution explores the Soviet-era solar furnace and its Modernist legacy. Architecture studio GRACE, led by curators Ekaterina Golovatyuk and Giacomo Cantoni, has curated A Matter of Radiance. The focus is the Sun Institute of Material Science – originally known as the Sun Heliocomplex – an incredible large-scale scientific structure built in 1987 on a natural, seismic-free foundation near Tashkent, and one of only two facilities that study material behaviour under extreme temperatures. The exhibition examines the site’s historical and contemporary significance while reflecting on a scientific legacy and influence that reach beyond national borders.
    Applied Arts Pavilion (photography: Andrea Avezzù)
    V&A Applied Arts Pavilion
    Diller Scofidio + Renfro (DS+R) is having a moment. The US-based practice, in collaboration with V&A chief curator Brendan Cormier, has curated On Storage, which aptly explores global storage architectures in a pavilion that strongly links to the V&A’s recent opening of Storehouse, its new collections archive in east London.
    Featured is a six-channel film entitled Boxed: The Mild Boredom of Order, directed by the practice itself and following a toothbrush, as a metaphor for an everyday consumer product, on its journey through different forms of storage across the globe – from warehouse to distribution centre to baggage handlers, down to the compact space of a suitcase.
    Also on display are large-format photographs of V&A East Storehouse, DS+R’s original architectural model and sketchbook and behind-the-scenes photography of Storehouse at work, taken by emerging east London-based photographers.
    Canal Café (photography: Marco Zorzanello)
    Golden Lion for the best participation in the actual exhibition went to Canal Café, an intervention designed by V&A East Storehouse’s architect DS+R with Natural Systems Utilities, SODAI, Aaron Betsky and Davide Oldani.
    Serving up canal-water espresso, the installation is a demonstration of how Venice itself can be a laboratory to understand how to live on the water in a time of water scarcity. The structure, located on the edge of the Arsenale’s building complex, draws water from its lagoon before filtering it onsite via a hybrid of natural and artificial methods, including a mini wetland with grasses.
    The project was recognised for its persistence, having started almost 20 years ago, just showing how water scarcity, contamination and flooding are still major concerns both globally and, more locally, in the tourist-heavy city of Venice.
    And what else?
    Holy See Pavilion (photography: Andrea Avezzù)
    The Holy See
    Much like the Danish Pavilion, the Pavilion of the Holy See is also taking on an approach of renewal this year. Over the next six months, Opera Aperta will breathe new life into the Santa Maria Ausiliatrice Complex in the Castello district of Venice. Founded as a hospice for pilgrims in 1171, the building later became the oldest hospital and was converted into a school in the 18th century. In 2001, the City of Venice allocated it for cultural use, and for the next four years it will be managed by the Dicastery for Culture and Education of the Holy See, which will oversee its restoration.
    Curated by architect, curator and researcher Marina Otero Verzier and artistic director of Fondaco Italia, Giovanna Zabotti, the complex has been turned into a constant ‘living laboratory’ of collective repair – and received a special mention in the biennale awards.
    The restoration works, open from Tuesday to Friday, are being carried out by local artisans and specialised restorers with expertise in recovering stone, marble, terracotta, mural and canvas painting, stucco, wood and metal artworks.
    The beauty, however, lies in the photogenic fabrics, lit by a warm yellow glow, hanging from the walls within, gently wrapping the building’s surfaces, leaving openings that allow movement and offer glimpses of the ongoing restoration. Mobile scaffolding, used to support the works, also doubles up as furniture, providing space for equipment and subdividing the interior.
    Togo Pavilion (photography: Andrea Avezzù)
    The Republic of Togo has presented its first pavilion ever at the biennale this year with the project Considering Togo’s Architectural Heritage, which sits intriguingly at the back of a second-hand furniture shop. The inaugural pavilion is curated by Lomé and Berlin-based Studio NEiDA and is in Venice’s Squero Castello.
    Exploring Togo’s architectural narratives from the early 20th century onwards, along with key ongoing restoration efforts, it documents examples of the west African country’s heritage, highlighting both traditional and more modern building techniques – from Nôk cave dwellings to Afro-Brazilian architecture developed by freed slaves to post-independence Modernist buildings. Some of the buildings showcased are in disrepair, although most of the modern structures, including the Hotel de la Paix and the Bourse du Travail, remain in use today – suggestive of a future of repair and celebration.
    Estonian Pavilion (photography: Joosep Kivimäe)
    Estonia Pavilion
    Another firm favourite this year is the Estonian exhibition on Riva dei Sette Martiri, on the waterfront between Corso Garibaldi and the Giardini. The Guardian’s Olly Wainwright said that, outside the Giardini, it packed ‘the most powerful punch of all.’
    Simple and effective, Let Me Warm You, curated by a trio of architects – Keiti Lige, Elina Liiva and Helena Männa – asks whether current insulation-driven renovations are merely a ‘checkbox’ to meet European energy targets or ‘a real chance’ to enhance the spatial and social quality of mass housing.
    The façade of the historic Venetian palazzetto in which it is housed is clad with fibre-cement insulation panels in the same process used in Estonia itself for its mass housing – a powerful visual statement showcasing a problematic disregard for the character and potential of typical habitable spaces. Inside, the ground floor is wrapped in plastic and exhibits how the dynamics between different stakeholders influence spatial solutions, including named stickers to encourage discussion among your peers.
    Venice Procuratie (photography: Mike Merkenschlager)
    SMAC (San Marco Art Centre)
    Timed to open to the public at the same time as the biennale, SMAC is a new permanent arts institution in Piazza San Marco, on the second floor of the Procuratie, which is owned by Generali. The exhibition space, open to the public for the first time in 500 years, comprises 16 galleries arranged along a continuous corridor stretching over 80m, recently restored by David Chipperfield Architects.
    Visitors can expect access through a private courtyard leading on to a monumental staircase and experience a typically sensitive Chipperfield restoration, which has revived the building’s original details: walls covered in a light grey Venetian marmorino made from crushed marble and floors of white terrazzo.
    During the summer, its inaugural programme features two solo exhibitions dedicated to Australian modern architect Harry Seidler and Korean landscape designer Jung Youngsun.
    Holcim's installation (photography: Celestia Studio)
    Holcim x Elemental
    Concrete manufacturer Holcim makes an appearance for a third time at Venice, this time partnering with Chilean Pritzker Prize-winning Alejandro Aravena’s practice Elemental – curator of the 2016 biennale – to launch a resilient housing prototype that follows on from the Norman Foster-designed Essential Homes Project.
    The ‘carbon-neutral’ structure incorporates Holcim’s range of low-carbon concrete ECOPact and is on display as part of the Time Space Existence exhibition organised by the European Cultural Centre in their gardens.
    It also applies Holcim’s ‘biochar’ technology for the first time, a concrete mix with 100 per cent recycled aggregates, in a full-scale Basic Services Unit. This follows an incremental design approach, which could entail fast and efficient construction via the provision of only essential housing components, and via self-build.
    The Next Earth at Palazzo Diedo (photography: Joan Porcel)
    The Next Earth
    At Palazzo Diedo’s incredible dedicated Berggruen Arts and Culture space, MIT’s department of architecture and think tank Antikythera have come together to create the exhibition The Next Earth: Computation, Crisis, Cosmology, which questions how philosophy and architecture must and can respond to various planet-wide crises.
    Antikythera’s The Noocene: Computation and Cosmology from Antikythera to AI looks at the evolution of ‘planetary computation’ as an ‘accidental’ megastructure through which systems, from the molecular to the atmospheric scale, become both comprehensible and composable. What is actually on display is an architectural-scale video monolith and short films on AI, astronomy and artificial life, as well as selected artefacts. MIT’s Climate Work: Un/Worlding the Planet features 37 works-in-progress, each looking at material supply chains, energy expenditure, modes of practice and deep-time perspectives. Take from it what you will.
    The 19th International Venice Architecture Biennale remains open until Sunday, 23 November 2025.
    Venice Biennale 2025 round-up: what else to see?
    This edition of the Venice Biennale includes 65 national pavilions, 11 collateral events, and over 750 participants in the international exhibition curated by Italian architect and engineer Carlo Ratti. Entitled Intelligens: Natural Artificial Collective, its stated aim is to make Venice a ‘living laboratory’. But Ratti’s exhibition in the Arsenale has been hit by mixed reviews. The AJ’s Rob Wilson described it as ‘a bit of a confusing mess’, while other media outlets have called the robot-heavy exhibit of future-facing building-focused solutions to the climate crisis a ‘tech-bro fever dream’ and a ‘mind-boggling rollercoaster’ to mention a few. It is a distinct shift away from the biennale of two years ago twhen Ghanaian-Scottish architect Lesley Lokko curated the main exhibitions, including 89 participants – of which more than half were from Africa or the African diaspora – in a convincing reset of the architectural conversation.Advertisement This year’s National Pavilions and collateral exhibits, by contrast, have tackled the largest themes in architecture and the world right now in a less constrained way than the main exhibitions. The exhibits are radical and work as a useful gauge for understanding what’s important in each country: decarbonisation, climate resilience, the reconstruction of Gaza, and an issue more prevalent in politics closer to home: gender wars. What's not to miss in the Giardini? British PavilionUK Pavilion The British Pavilion this year, which won a special mention from the Venetian jury, is housing a show by a British-Kenyan collab titled GBR – Geology of Britannic Repair. In it, the curators explore the links between colonialism, the built environment and geological extraction. Focusing on the Rift Valley, which runs from east Africa to the Middle East, including Palestine, the exhibition was curated by the Nairobi-based studio cave_bureau, UK-based curator, writer and Farrell Centre director Owen Hopkins and Queen Mary University professor Kathryn Yusoff. The pavilion’s façade is cloaked by a beaded veil of agricultural waste briquettes and clay and glass beads, produced in Kenya and India, echoing both Maasai practices and beads once made on Venice’s Murano, as currency for the exchange of metals, minerals and slaves. The pavilion’s six gallery spaces include multisensory installations such as the Earth Compass, a series of celestial maps connecting London and Nairobi; the Rift Room, tracing one of humans’ earliest migration routes; and the Shimoni Slave Cave, featuring a large-scale bronze cast of a valley cave historically used as a holding pen for enslaved people.Advertisement The show also includes Objects of Repair, a project by design-led research group Palestine Regeneration Team, looking at how salvaged materials could help rebuild war-torn Gaza, the only exhibit anywhere in the Biennale that tackled the reconstruction of Gaza face-on – doing so impressively, both politically and sensitively. here. Danish PavilionDemark Pavilion A firm favourite by most this year, the Danish exhibition Build of Site, curated by Søren Pihlmann of Pihlmann Architects, transforms the pavilion, which requires renovation anyway, into both a renovation site and archive of materials. Clever, simple and very methodical, the building is being both renewed while at the same time showcasing innovative methods to reuse surplus materials uncovered during the construction process – as an alternative to using new resources to build a temporary exhibition. 
The renovation of the 1950s Peter Koch-designed section of the pavilion began in December 2024 and will be completed following the biennale, having been suspended for its duration. On display are archetypal elements including podiums, ramps, benches and tables – all constructed from the surplus materials unearthed during the renovation, such as wood, limestone, concrete, stone, sand, silt and clay. Belgian PavilionBelgium Pavilion If you need a relaxing break from the intensity of the biennale, then the oldest national pavilion in the Giardini is the one for you. Belgium’s Building Biospheres: A New Alliance between Nature and Architecture brings ‘plant intelligence’ to the fore. Commissioned by the Flanders Architecture Institute and curated by landscape architect Bas Smets and neurobiologist Stefano Mancuso, the exhibit investigates how the natural ‘intelligence’ of plants can be used to produce an indoor climate – elevating the role of landscape design and calling for it to no longer serve as a backdrop for architecture. Inside, more than 200 plants occupy the central area beneath the skylight, becoming the pavilion’s centrepiece, with the rear space visualising ‘real-time’ data on the prototype’s climate control performance. Spanish PavilionSpain Pavilion One for the pure architecture lovers out there, models, installations, photographs and timber structures fill the Spanish Pavilion in abundance. Neatly curated by architects Roi Salgueiro Barrio and Manuel Bouzas Barcala, Internalities shows a series of existing and research projects that have contributed to decarbonising construction in Spain. The outcome? An extensive collection of work exploring the use of very local and very specific regenerative and low-carbon construction and materials – including stone, wood and soil. The joy of this pavilion comes from the 16 beautiful timber frames constructed from wood from communal forests in Galicia. Polish PavilionPoland Pavilion Poland’s pavilion was like Marmite this year. Some loved its playful approach while others found it silly. Lares and Penates, taking its name from ancient Roman deities of protection, has been curated by Aleksandra Kędziorek and looks at what it means and takes to have a sense of security in architecture. Speaking to many different anxieties, it refers to the unspoken assumption of treating architecture as a safe haven against the elements, catastrophes and wars – showcasing and elevating the mundane solutions and signage derived from building, fire and health regulations. The highlight? An ornate niche decorated with tiles and stones just for … a fire extinguisher. Dutch PavilionNetherlands Pavilion Punchy and straight to the point, SIDELINED: A Space to Rethink Togetherness takes sports as a lens for looking at how spatial design can both reveal and disrupt the often-exclusionary dynamics of everyday environments. Within the pavilion, the exhibit looks beyond the large-scale arena of the stadium and gymnasium to investigate the more localised and intimate context of the sports bar, as well as three alternative sports – a site of both social production and identity formation – as a metaphor for uniting diverse communities. 
The pavilion-turned-sports bar, designed by Koos Breen and Jeannette Slütter and inspired by Asger Jorn’s three-sided sports field, is a space for fluidity and experimentation where binary oppositions, social hierarchies and cultural values are contested and reshaped – complete with jerseys and football scarfsworn by players in the alternative Anonymous Allyship aligning the walls. Read Derin Fadina’s review for the AJ here. Performance inside the Nordic Countries PavilionNordic Countries Pavilion Probably the most impactful national pavilion this year, the Nordic Countries have presented an installation with performance work. Curated by Kaisa Karvinen, Industry Muscle: Five Scores for Architecture continues Finnish artist Teo Ala-Ruona’s work on trans embodiment and ecology by considering the trans body as a lens through which to examine modern architecture and the built environment. The three-day exhibition opening featured a two-hour performance each day with Ala-Ruona and his troupe crawling, climbing and writhing around the space, creating a bodily dialogue with the installations and pavilion building itself, which was designed by celebrated Modernist architect Sverre Fehn. The American pavilion next door, loudlyturns its back on what’s going on in its own country by just celebrating the apathetical porch, making the Nordic Countries seem even more relevant in this crucial time. Read Derin Fadina’s review for the AJ here. German PavilionGermany Pavilion An exhibit certainly grabbing the issue of climate change by its neck is the German contribution, Stresstest. Curated by Nicola Borgmann, Elisabeth Endres, Gabriele G Kiefer and Daniele Santucci, the pavilion has turned climate change into a literal physical and psychological experience for visitors by creating contrasting ‘stress’ and ‘de-stress’ rooms. In the dark stress room, a large metal sculpture creates a cramped and hot space using heating mats hung from the ceiling and powered by PVs. Opposite is a calmer space demonstrating strategies that could be used to reduce the heat of cities, and between the two spaces is a film focusing on the impacts of cities becoming hotter. If this doesn’t highlight the urgency of the situation, I’m not sure what will. Best bits of the Arsenale outside the main exhibitions Bahrain PavilionBahrain Pavilion Overall winner of this year’s Golden Lion for best national participation, Bahrain’s pavilion in the historic Artiglierie of the Arsenale is a proposal for living and working through heat conditions. Heatwave, curated by architect Andrea Faraguna, reimagines public space design by exploring passive cooling strategies rooted in the Arab country’s climate, as well as cultural context. A geothermal well and solar chimney are connected through a thermo-hygrometric axis that links underground conditions with the air outside. The inhabitable space that hosts visitors is thus compressed and defined by its earth-covered floor and suspended ceiling, and is surrounded by memorable sandbags, highlighting its scalability for particularly hot construction sites in the Gulf where a huge amount of construction is taking place. In the Arsenale’s exhibition space, where excavation wasn’t feasible, this system has been adapted into mechanical ventilation, bringing in air from the canal side and channelling it through ductwork to create a microclimate. 
Slovenian PavilionSlovenia Pavilion The AJ’s Rob Wilson’s top pavilion tip this year provides an enjoyable take on the theme of the main exhibition, highlighting how the tacit knowledge and on-site techniques and skills of construction workers and craftspeople are still the key constituent in architectural production despite all the heat and light about robotics, prefabrication, artificial intelligence and 3D printing. Master Builders, curated by Ana Kosi and Ognen Arsov and organised by the Museum of Architecture and Designin Ljubljana, presents a series of ‘totems’ –accumulative sculpture-like structures that are formed of conglomerations of differently worked materials, finishes and building elements. These are stacked up into crazy tower forms, which showcase various on-site construction skills and techniques, their construction documented in accompanying films. Uzbekistan PavilionUzbekistan Pavilion Uzbekistan’s contribution explores the Soviet era solar furnace and Modernist legacy. Architecture studio GRACE, led by curators Ekaterina Golovatyuk and Giacomo Cantoni have curated A Matter of Radiance. The focus is the Sun Institute of Material Science – originally known as the Sun Heliocomplex – an incredible large-scale scientific structure built in 1987 on a natural, seismic-free foundation near Tashkent and one of only two that study material behaviour under extreme temperatures. The exhibition examines the solar oven’s site’s historical and contemporary significance while reflecting on its scientific legacy and influence moving beyond just national borders. Applied Arts PavilionV&A Applied Arts Pavilion Diller Scofidio + Renfrois having a moment. The US-based practice, in collaboration with V&A chief curator Brendan Cormier, has curated On Storage, which aptly explores global storage architectures in a pavilion that strongly links to the V&A’s recent opening of Storehouse, its newcollections archive in east London. Featured is a six-channelfilm entitled Boxed: The Mild Boredom of Order, directed by the practice itself and following a toothbrush, as a metaphor for an everyday consumer product, on its journey through different forms of storage across the globe – from warehouse to distribution centre to baggage handlers down to the compact space of a suitcase. Also on display are large-format photographs of V&A East Storehouse, DS+R’s original architectural model and sketchbook and behind-the-scenes photography of Storehouse at work, taken by emerging east London-based photographers. Canal CaféCanal café Golden Lion for the best participation in the actual exhibition went to Canal Café, an intervention designed by V&A East Storehouse’s architect DS+R with Natural Systems Utilities, SODAI, Aaron Betsky and Davide Oldani. Serving up canal-water espresso, the installation is a demonstration of how Venice itself can be a laboratory to understand how to live on the water in a time of water scarcity. The structure, located on the edge of the Arsenale’s building complex, draws water from its lagoon before filtering it onsite via a hybrid of natural and artificial methods, including a mini wetland with grasses. The project was recognised for its persistence, having started almost 20 years ago, just showing how water scarcity, contamination and flooding are still major concerns both globally and, more locally, in the tourist-heavy city of Venice. And what else? 
Holy See PavilionThe Holy See Much like the Danish Pavilion, the Pavilion of the Holy See is also taking on an approach of renewal this year. Over the next six months, Opera Aperta will breathe new life into the Santa Maria Ausiliatrice Complex in the Castello district of Venice. Founded as a hospice for pilgrims in 1171, the building later became the oldest hospital and was converted into school in the 18th century. In 2001, the City of Venice allocated it for cultural use and for the next four years it will be managed by the Dicastery for Culture and Education of the Holy See to oversee its restoration. Curated by architect, curator and researcher Marina Otero Verzier and artistic director of Fondaco Italia, Giovanna Zabotti, the complex has been turned into a constant ‘living laboratory’ of collective repair – and received a special mention in the biennale awards. The restoration works, open from Tuesday to Friday, are being carried out by local artisans and specialised restorers with expertise in recovering stone, marble, terracotta, mural and canvas painting, stucco, wood and metal artworks. The beauty, however, lies in the photogenic fabrics, lit by a warm yellow glow, hanging from the walls within, gently wrapping the building’s surfaces, leaving openings that allow movement and offer glimpses of the ongoing restoration. Mobile scaffolding, used to support the works, also doubles up as furniture, providing space for equipment and subdividing the interior. Togo PavilionTogo Pavilion The Republic of Togo has presented its first pavilion ever at the biennale this year with the project Considering Togo’s Architectural Heritage, which sits intriguingly at the back of a second-hand furniture shop. The inaugural pavilion is curated by Lomé and Berlin-based Studio NEiDA and is in Venice’s Squero Castello. Exploring Togo’s architectural narratives from the early 20th century, and key ongoing restoration efforts, it documents key examples of the west African country’s heritage, highlighting both traditional and more modern building techniques – from Nôk cave dwellings to Afro-Brazilian architecture developed by freed slaves to post-independence Modernist buildings. Some buildings showcased are in disrepair, despite most of the modern structures remaining in use today, including Hotel de la Paix and the Bourse du Travail, suggestive of a future of repair and celebration. Estonian PavilionEstonia Pavilion Another firm favourite this year is the Estonian exhibition on Riva dei Sette Martiri on the waterfront between Corso Garibaldi and the Giardini.  The Guardian’s Olly Wainwright said that outside the Giardini, it packed ‘the most powerful punch of all.’ Simple and effective, Let Me Warm You, curated by trio of architects Keiti Lige, Elina Liiva and Helena Männa, asks whether current insulation-driven renovations are merely a ‘checkbox’ to meet European energy targets or ‘a real chance’ to enhance the spatial and social quality of mass housing. The façade of the historic Venetian palazzetto in which it is housed is clad with fibre-cement insulation panels in the same process used in Estonia itself for its mass housing – a powerful visual statement showcasing a problematic disregard for the character and potential of typical habitable spaces. Inside, the ground floor is wrapped in plastic and exhibits how the dynamics between different stakeholders influence spatial solutions, including named stickers to encourage discussion among your peers. 
Venice ProcuratieSMACTimed to open to the public at the same time as the biennale, SMAC is a new permanent arts institution in Piazza San Marco, on the second floor of the Procuratie, which is owned by Generali. The exhibition space, open to the public for the first time in 500 years, comprises 16 galleries arranged along a continuous corridor stretching over 80m, recently restored by David Chipperfield Architects. Visitors can expect access through a private courtyard leading on to a monumental staircase and experience a typically sensitive Chipperfield restoration, which has revived the building’s original details: walls covered in a light grey Venetian marmorino made from crushed marble and floors of white terrazzo. During the summer, its inaugural programme features two solo exhibitions dedicated to Australian modern architect Harry Seidler and Korean landscape designer Jung Youngsun. Holcim's installationHolcim x Elemental Concrete manufacturer Holcim makes an appearance for a third time at Venice, this time partnering with Chilean Pritzker Prize-winning Alejandro Aravena’s practice Elemental – curator of the 2016 biennale – to launch a resilient housing prototype that follows on from the Norman Foster-designed Essential Homes Project. The ‘carbon-neutral’ structure incorporates Holcim’s range of low-carbon concrete ECOPact and is on display as part of the Time Space Existence exhibition organised by the European Cultural Centre in their gardens. It also applies Holcim’s ‘biochar’ technology for the first time, a concrete mix with 100 per cent recycled aggregates, in a full-scale Basic Services Unit. This follows an incremental design approach, which could entail fast and efficient construction via the provision of only essential housing components, and via self-build. The Next Earth at Palazzo DiedoThe Next Earth At Palazzo Diedo’s incredible dedicated Berggruen Arts and Culture space, MIT’s department of architecture and think tank Antikytherahave come together to create the exhibition The Next Earth: Computation, Crisis, Cosmology, which questions how philosophy and architecture must and can respond to various planet-wide crises. Antikythera’s The Noocene: Computation and Cosmology from Antikythera to AI looks at the evolution of ‘planetary computation’ as an ‘accidental’ megastructure through which systems, from the molecular to atmospheric scales, become both comprehensible and composable. What is actually on display is an architectural scale video monolith and short films on AI, astronomy and artificial life, as well as selected artefacts. MIT’s Climate Work: Un/Worlding the Planet features 37 works-in-progress, each looking at material supply chains, energy expenditure, modes of practice and deep-time perspectives. Take from it what you will. The 19th International Venice Architecture Biennale remains open until Sunday, 23 November 2025. #venice #biennale #roundup #what #else
    WWW.ARCHITECTSJOURNAL.CO.UK
    Venice Biennale 2025 round-up: what else to see?
    This edition of the Venice Biennale includes 65 national pavilions, 11 collateral events, and over 750 participants in the international exhibition curated by Italian architect and engineer Carlo Ratti. Entitled Intelligens: Natural Artificial Collective, its stated aim is to make Venice a ‘living laboratory’. But Ratti’s exhibition in the Arsenale has been hit by mixed reviews. The AJ’s Rob Wilson described it as ‘a bit of a confusing mess’, while other media outlets have called the robot-heavy exhibit of future-facing building-focused solutions to the climate crisis a ‘tech-bro fever dream’ and a ‘mind-boggling rollercoaster’ to mention a few. It is a distinct shift away from the biennale of two years ago twhen Ghanaian-Scottish architect Lesley Lokko curated the main exhibitions, including 89 participants – of which more than half were from Africa or the African diaspora – in a convincing reset of the architectural conversation.Advertisement This year’s National Pavilions and collateral exhibits, by contrast, have tackled the largest themes in architecture and the world right now in a less constrained way than the main exhibitions. The exhibits are radical and work as a useful gauge for understanding what’s important in each country: decarbonisation, climate resilience, the reconstruction of Gaza, and an issue more prevalent in politics closer to home: gender wars. What's not to miss in the Giardini? British Pavilion (photography: Chris Lane) UK Pavilion The British Pavilion this year, which won a special mention from the Venetian jury, is housing a show by a British-Kenyan collab titled GBR – Geology of Britannic Repair. In it, the curators explore the links between colonialism, the built environment and geological extraction. Focusing on the Rift Valley, which runs from east Africa to the Middle East, including Palestine, the exhibition was curated by the Nairobi-based studio cave_bureau, UK-based curator, writer and Farrell Centre director Owen Hopkins and Queen Mary University professor Kathryn Yusoff. The pavilion’s façade is cloaked by a beaded veil of agricultural waste briquettes and clay and glass beads, produced in Kenya and India, echoing both Maasai practices and beads once made on Venice’s Murano, as currency for the exchange of metals, minerals and slaves. The pavilion’s six gallery spaces include multisensory installations such as the Earth Compass, a series of celestial maps connecting London and Nairobi; the Rift Room, tracing one of humans’ earliest migration routes; and the Shimoni Slave Cave, featuring a large-scale bronze cast of a valley cave historically used as a holding pen for enslaved people.Advertisement The show also includes Objects of Repair, a project by design-led research group Palestine Regeneration Team (PART), looking at how salvaged materials could help rebuild war-torn Gaza, the only exhibit anywhere in the Biennale that tackled the reconstruction of Gaza face-on – doing so impressively, both politically and sensitively. Read more here. Danish Pavilion (photography: Hampus Berndtson) Demark Pavilion A firm favourite by most this year, the Danish exhibition Build of Site, curated by Søren Pihlmann of Pihlmann Architects, transforms the pavilion, which requires renovation anyway, into both a renovation site and archive of materials. 
Clever, simple and very methodical, the building is being both renewed while at the same time showcasing innovative methods to reuse surplus materials uncovered during the construction process – as an alternative to using new resources to build a temporary exhibition. The renovation of the 1950s Peter Koch-designed section of the pavilion began in December 2024 and will be completed following the biennale, having been suspended for its duration. On display are archetypal elements including podiums, ramps, benches and tables – all constructed from the surplus materials unearthed during the renovation, such as wood, limestone, concrete, stone, sand, silt and clay. Belgian Pavilion (photography: Michiel De Cleene) Belgium Pavilion If you need a relaxing break from the intensity of the biennale, then the oldest national pavilion in the Giardini is the one for you. Belgium’s Building Biospheres: A New Alliance between Nature and Architecture brings ‘plant intelligence’ to the fore. Commissioned by the Flanders Architecture Institute and curated by landscape architect Bas Smets and neurobiologist Stefano Mancuso, the exhibit investigates how the natural ‘intelligence’ of plants can be used to produce an indoor climate – elevating the role of landscape design and calling for it to no longer serve as a backdrop for architecture. Inside, more than 200 plants occupy the central area beneath the skylight, becoming the pavilion’s centrepiece, with the rear space visualising ‘real-time’ data on the prototype’s climate control performance. Spanish Pavilion (photography: Luca Capuano) Spain Pavilion One for the pure architecture lovers out there, models (32!), installations, photographs and timber structures fill the Spanish Pavilion in abundance. Neatly curated by architects Roi Salgueiro Barrio and Manuel Bouzas Barcala, Internalities shows a series of existing and research projects that have contributed to decarbonising construction in Spain. The outcome? An extensive collection of work exploring the use of very local and very specific regenerative and low-carbon construction and materials – including stone, wood and soil. The joy of this pavilion comes from the 16 beautiful timber frames constructed from wood from communal forests in Galicia. Polish Pavilion (photography: Luca Capuano) Poland Pavilion Poland’s pavilion was like Marmite this year. Some loved its playful approach while others found it silly. Lares and Penates, taking its name from ancient Roman deities of protection, has been curated by Aleksandra Kędziorek and looks at what it means and takes to have a sense of security in architecture. Speaking to many different anxieties, it refers to the unspoken assumption of treating architecture as a safe haven against the elements, catastrophes and wars – showcasing and elevating the mundane solutions and signage derived from building, fire and health regulations. The highlight? An ornate niche decorated with tiles and stones just for … a fire extinguisher. Dutch Pavilion (photography: Cristiano Corte) Netherlands Pavilion Punchy and straight to the point, SIDELINED: A Space to Rethink Togetherness takes sports as a lens for looking at how spatial design can both reveal and disrupt the often-exclusionary dynamics of everyday environments. 
Within the pavilion, the exhibit looks beyond the large-scale arena of the stadium and gymnasium to investigate the more localised and intimate context of the sports bar, as well as three alternative sports – a site of both social production and identity formation – as a metaphor for uniting diverse communities. The pavilion-turned-sports bar, designed by Koos Breen and Jeannette Slütter and inspired by Asger Jorn’s three-sided sports field, is a space for fluidity and experimentation where binary oppositions, social hierarchies and cultural values are contested and reshaped – complete with jerseys and football scarfs (currently a must-have fashion item) worn by players in the alternative Anonymous Allyship aligning the walls. Read Derin Fadina’s review for the AJ here. Performance inside the Nordic Countries Pavilion (photography: Venla Helenius) Nordic Countries Pavilion Probably the most impactful national pavilion this year (and with the best tote bag by far), the Nordic Countries have presented an installation with performance work. Curated by Kaisa Karvinen, Industry Muscle: Five Scores for Architecture continues Finnish artist Teo Ala-Ruona’s work on trans embodiment and ecology by considering the trans body as a lens through which to examine modern architecture and the built environment. The three-day exhibition opening featured a two-hour performance each day with Ala-Ruona and his troupe crawling, climbing and writhing around the space, creating a bodily dialogue with the installations and pavilion building itself, which was designed by celebrated Modernist architect Sverre Fehn. The American pavilion next door, loudly (country music!) turns its back on what’s going on in its own country by just celebrating the apathetical porch, making the Nordic Countries seem even more relevant in this crucial time. Read Derin Fadina’s review for the AJ here. German Pavilion (photography: Luca Capuano) Germany Pavilion An exhibit certainly grabbing the issue of climate change by its neck is the German contribution, Stresstest. Curated by Nicola Borgmann, Elisabeth Endres, Gabriele G Kiefer and Daniele Santucci, the pavilion has turned climate change into a literal physical and psychological experience for visitors by creating contrasting ‘stress’ and ‘de-stress’ rooms. In the dark stress room, a large metal sculpture creates a cramped and hot space using heating mats hung from the ceiling and powered by PVs. Opposite is a calmer space demonstrating strategies that could be used to reduce the heat of cities, and between the two spaces is a film focusing on the impacts of cities becoming hotter. If this doesn’t highlight the urgency of the situation, I’m not sure what will. Best bits of the Arsenale outside the main exhibitions Bahrain Pavilion (photography: Andrea Avezzù) Bahrain Pavilion Overall winner of this year’s Golden Lion for best national participation, Bahrain’s pavilion in the historic Artiglierie of the Arsenale is a proposal for living and working through heat conditions. Heatwave, curated by architect Andrea Faraguna, reimagines public space design by exploring passive cooling strategies rooted in the Arab country’s climate, as well as cultural context. A geothermal well and solar chimney are connected through a thermo-hygrometric axis that links underground conditions with the air outside. 
The inhabitable space that hosts visitors is thus compressed and defined by its earth-covered floor and suspended ceiling, and is surrounded by memorable sandbags, highlighting its scalability for particularly hot construction sites in the Gulf where a huge amount of construction is taking place. In the Arsenale’s exhibition space, where excavation wasn’t feasible, this system has been adapted into mechanical ventilation, bringing in air from the canal side and channelling it through ductwork to create a microclimate. Slovenian Pavilion (photography: Andrea Avezzù) Slovenia Pavilion The AJ’s Rob Wilson’s top pavilion tip this year provides an enjoyable take on the theme of the main exhibition, highlighting how the tacit knowledge and on-site techniques and skills of construction workers and craftspeople are still the key constituent in architectural production despite all the heat and light about robotics, prefabrication, artificial intelligence and 3D printing. Master Builders, curated by Ana Kosi and Ognen Arsov and organised by the Museum of Architecture and Design (MAO) in Ljubljana, presents a series of ‘totems’ –accumulative sculpture-like structures that are formed of conglomerations of differently worked materials, finishes and building elements. These are stacked up into crazy tower forms, which showcase various on-site construction skills and techniques, their construction documented in accompanying films. Uzbekistan Pavilion (photography: Luca Capuano) Uzbekistan Pavilion Uzbekistan’s contribution explores the Soviet era solar furnace and Modernist legacy. Architecture studio GRACE, led by curators Ekaterina Golovatyuk and Giacomo Cantoni have curated A Matter of Radiance. The focus is the Sun Institute of Material Science – originally known as the Sun Heliocomplex – an incredible large-scale scientific structure built in 1987 on a natural, seismic-free foundation near Tashkent and one of only two that study material behaviour under extreme temperatures. The exhibition examines the solar oven’s site’s historical and contemporary significance while reflecting on its scientific legacy and influence moving beyond just national borders. Applied Arts Pavilion (photography: Andrea Avezzù) V&A Applied Arts Pavilion Diller Scofidio + Renfro (DS+R) is having a moment. The US-based practice, in collaboration with V&A chief curator Brendan Cormier, has curated On Storage, which aptly explores global storage architectures in a pavilion that strongly links to the V&A’s recent opening of Storehouse, its new (and free) collections archive in east London. Featured is a six-channel (and screen) film entitled Boxed: The Mild Boredom of Order, directed by the practice itself and following a toothbrush, as a metaphor for an everyday consumer product, on its journey through different forms of storage across the globe – from warehouse to distribution centre to baggage handlers down to the compact space of a suitcase. Also on display are large-format photographs of V&A East Storehouse, DS+R’s original architectural model and sketchbook and behind-the-scenes photography of Storehouse at work, taken by emerging east London-based photographers. Canal Café (photography: Marco Zorzanello) Canal café Golden Lion for the best participation in the actual exhibition went to Canal Café, an intervention designed by V&A East Storehouse’s architect DS+R with Natural Systems Utilities, SODAI, Aaron Betsky and Davide Oldani. 
Serving up canal-water espresso, the installation is a demonstration of how Venice itself can be a laboratory to understand how to live on the water in a time of water scarcity. The structure, located on the edge of the Arsenale’s building complex, draws water from its lagoon before filtering it onsite via a hybrid of natural and artificial methods, including a mini wetland with grasses. The project was recognised for its persistence, having started almost 20 years ago, just showing how water scarcity, contamination and flooding are still major concerns both globally and, more locally, in the tourist-heavy city of Venice. And what else? Holy See Pavilion (photography: Andrea Avezzù) The Holy See Much like the Danish Pavilion, the Pavilion of the Holy See is also taking on an approach of renewal this year. Over the next six months, Opera Aperta will breathe new life into the Santa Maria Ausiliatrice Complex in the Castello district of Venice. Founded as a hospice for pilgrims in 1171, the building later became the oldest hospital and was converted into school in the 18th century. In 2001, the City of Venice allocated it for cultural use and for the next four years it will be managed by the Dicastery for Culture and Education of the Holy See to oversee its restoration. Curated by architect, curator and researcher Marina Otero Verzier and artistic director of Fondaco Italia, Giovanna Zabotti, the complex has been turned into a constant ‘living laboratory’ of collective repair – and received a special mention in the biennale awards. The restoration works, open from Tuesday to Friday, are being carried out by local artisans and specialised restorers with expertise in recovering stone, marble, terracotta, mural and canvas painting, stucco, wood and metal artworks. The beauty, however, lies in the photogenic fabrics, lit by a warm yellow glow, hanging from the walls within, gently wrapping the building’s surfaces, leaving openings that allow movement and offer glimpses of the ongoing restoration. Mobile scaffolding, used to support the works, also doubles up as furniture, providing space for equipment and subdividing the interior. Togo Pavilion (photography: Andrea Avezzù) Togo Pavilion The Republic of Togo has presented its first pavilion ever at the biennale this year with the project Considering Togo’s Architectural Heritage, which sits intriguingly at the back of a second-hand furniture shop. The inaugural pavilion is curated by Lomé and Berlin-based Studio NEiDA and is in Venice’s Squero Castello. Exploring Togo’s architectural narratives from the early 20th century, and key ongoing restoration efforts, it documents key examples of the west African country’s heritage, highlighting both traditional and more modern building techniques – from Nôk cave dwellings to Afro-Brazilian architecture developed by freed slaves to post-independence Modernist buildings. Some buildings showcased are in disrepair, despite most of the modern structures remaining in use today, including Hotel de la Paix and the Bourse du Travail, suggestive of a future of repair and celebration. Estonian Pavilion (photography: Joosep Kivimäe) Estonia Pavilion Another firm favourite this year is the Estonian exhibition on Riva dei Sette Martiri on the waterfront between Corso Garibaldi and the Giardini.  
The Guardian’s Olly Wainwright said that, outside the Giardini, it packed ‘the most powerful punch of all.’ Simple and effective, Let Me Warm You, curated by a trio of architects – Keiti Lige, Elina Liiva and Helena Männa – asks whether current insulation-driven renovations are merely a ‘checkbox’ to meet European energy targets or ‘a real chance’ to enhance the spatial and social quality of mass housing. The façade of the historic Venetian palazzetto in which it is housed is clad with fibre-cement insulation panels, using the same process applied in Estonia itself to its mass housing – a powerful visual statement showcasing a problematic disregard for the character and potential of typical habitable spaces. Inside, the ground floor is wrapped in plastic and exhibits how the dynamics between different stakeholders influence spatial solutions, with named stickers to encourage discussion among peers.

Venice Procuratie (photography: Mike Merkenschlager)
SMAC (San Marco Art Centre)

Timed to open to the public at the same time as the biennale, SMAC is a new permanent arts institution in Piazza San Marco, on the second floor of the Procuratie, which is owned by Generali. The exhibition space, open to the public for the first time in 500 years, comprises 16 galleries arranged along a continuous corridor stretching over 80m, recently restored by David Chipperfield Architects. Visitors enter through a private courtyard leading on to a monumental staircase and experience a typically sensitive Chipperfield restoration, which has revived the building’s original details: walls covered in a light grey Venetian marmorino made from crushed marble and floors of white terrazzo. During the summer, its inaugural programme features two solo exhibitions dedicated to Australian modern architect Harry Seidler and Korean landscape designer Jung Youngsun.

Holcim’s installation (photography: Celestia Studio)
Holcim x Elemental

Concrete manufacturer Holcim makes an appearance at Venice for a third time, on this occasion partnering with Elemental – the practice of Chilean Pritzker Prize winner Alejandro Aravena, curator of the 2016 biennale – to launch a resilient housing prototype that follows on from the Norman Foster-designed Essential Homes Project. The ‘carbon-neutral’ structure incorporates ECOPact, Holcim’s range of low-carbon concrete, and is on display as part of the Time Space Existence exhibition organised by the European Cultural Centre in its gardens. It also applies Holcim’s ‘biochar’ technology, a concrete mix with 100 per cent recycled aggregates, for the first time in a full-scale Basic Services Unit. This follows an incremental design approach, which could entail fast and efficient construction through the provision of only the essential housing components, to be completed via self-build.

The Next Earth at Palazzo Diedo (photography: Joan Porcel)
The Next Earth

At Palazzo Diedo’s incredible dedicated Berggruen Arts and Culture space, MIT’s department of architecture and the think tank Antikythera (apparently taking its name from the first-known computer) have come together to create the exhibition The Next Earth: Computation, Crisis, Cosmology, which asks how philosophy and architecture can, and must, respond to various planet-wide crises. Antikythera’s The Noocene: Computation and Cosmology from Antikythera to AI looks at the evolution of ‘planetary computation’ as an ‘accidental’ megastructure through which systems, from the molecular to the atmospheric scale, become both comprehensible and composable.
What is actually on display is an architectural-scale video monolith and short films on AI, astronomy and artificial life, as well as selected artefacts. MIT’s Climate Work: Un/Worlding the Planet features 37 works-in-progress, each looking at material supply chains, energy expenditure, modes of practice and deep-time perspectives. Take from it what you will.

The 19th International Venice Architecture Biennale remains open until Sunday, 23 November 2025.
  • Big government is still good, even with Trump in power

    It’s easy to look at President Donald Trump’s second term and conclude that the less power and reach the federal government has, the better. After all, a smaller government might provide Trump or someone like him with fewer opportunities to disrupt people’s lives, leaving America less vulnerable to the whims of an aspiring autocrat. Weaker law-enforcement agencies could lack the capacity to enforce draconian policies. The president would have less say in how universities like Columbia conduct their business if they weren’t so dependent on federal funding. And he would have fewer resources to fundamentally change the American way of life.

    Trump’s presidency has the potential to reshape an age-old debate between the left and the right: Is it better to have a big government or a small one? The left, which has long advocated for bigger government as a solution to society’s problems, might be inclined to think that in the age of Trump, a strong government may be too risky. Say the United States had a single-payer universal health care system, for example. As my colleague Kelsey Piper pointed out, the government would have a lot of power to decide what sorts of medical treatments should and shouldn’t be covered, and certain forms of care that the right doesn’t support — like abortion or transgender health — would likely get cut when they’re in power. That’s certainly a valid concern. But the dangers Trump poses do not ultimately make the case for a small or weak government, because the principal problem with the Trump presidency is not that he or the federal government has too much power. It’s that there’s not enough oversight.

    Reducing the power of the government wouldn’t necessarily protect us. In fact, “making government smaller” is one of the ways that Trump might be consolidating power.

    First things first: What is “big government”?

    When Americans are polled about how they feel about “big government” programs — policies like universal health care, Social Security, welfare for the poor — the majority of people tend to support them. Nearly two-thirds of Americans believe the government should be responsible for ensuring everyone has health coverage. But when you ask Americans whether they support “big government” in the abstract, a solid majority say they view it as a threat.

    That might sound like a story of contradictions. But it also makes sense, because “big government” can have many different meanings. It can be a police state that surveils its citizens, an expansive regulatory state that establishes and enforces rules for the private sector, a social welfare state that directly provides a decent standard of living for everyone, or some combination of the three. In the United States, the debate over “big government” can also include arguments about federalism, or how much power the federal government should have over states. All these distinctions complicate the debate over the size of government, because while someone might support a robust welfare system, they might simultaneously be opposed to being governed by a surveillance state or having the federal government involved in state and local affairs.

    As much as Americans like to fantasize about small government, the reality is that the wealthiest economies in the world have all been a product of big government, and the United States is no exception. That form of government includes providing a baseline social safety net, funding basic services, and regulating commerce. It also includes a government that has the capacity to enforce its rules and regulations.

    A robust state that caters to the needs of its people, and that is able to respond quickly in times of crisis, is essential. Take the Covid-19 pandemic. The US government, under both the Trump and Biden administrations, was able to inject trillions of dollars into the economy to avert a sustained economic downturn. As a result, people were able to withstand the economic shocks, and poverty actually declined. Stripping the state of the basic powers it needs to improve the lives of its citizens will only make it less effective and erode people’s faith in it as a central institution, making people less likely to participate in the democratic process, comply with government policies, or even accept election outcomes.

    A constrained government does not mean a small government

    But what happens when the people in power have no respect for democracy? The argument for a weaker and smaller government often suggests that a smaller government would be more constrained in the harm it can cause, while big government is more unrestrained. In this case, the argument is that if the US had a smaller government, then Trump could not effectively use the power of the state — by, say, deploying federal law enforcement agencies or withholding federal funds — to deport thousands of immigrants, bully universities, and assault fundamental rights like the freedom of speech.

    But advocating for bigger government does not mean you believe in handing the state unlimited power to do as it pleases. Ultimately, the most important way to constrain government has less to do with its size and scope and more to do with its checks and balances. In fact, one of the biggest checks on Trump’s power so far has been the structure of the US government, not its size. Trump’s most dangerous examples of overreach — his attempts to conduct mass deportations, eliminate birthright citizenship, and revoke student visas and green cards based on political views — have been an example of how proper oversight has the potential to limit government overreach. To be sure, Trump’s policies have already upended people’s lives, chilled speech, and undermined the principle of due process. But while Trump has pushed through some of his agenda, he hasn’t been able to deliver at the scale he promised. That’s not because the federal government lacks the capacity to do those things. It’s because we have three equal branches of government, and the judicial branch, for all of its shortcomings in the Trump era, is still doing its most basic job to keep the executive branch in check.

    Reforms should include more oversight, not shrinking government

    The biggest lesson from Trump’s first term was that America’s system of checks and balances — rules and regulations, norms, and the separate branches of government — wasn’t strong enough. As it turned out, a lot of potential oversight mechanisms did not have enough teeth to meaningfully restrain the president from abusing his power. Trump incited an assault on the US Capitol in an effort to overturn the 2020 election, and Congress ultimately failed in its duty to convict him for his actions. Twice, impeachment was shown to be a useless tool to keep a president in check.

    But again, that’s a problem of oversight, not of the size and power of government. Still, oversight mechanisms need to be baked into big government programs to insulate them from petty politics or volatile changes from one administration to the next.

    Take the example of the hypothetical single-payer universal health care system. Laws dictating which treatments should be covered should be designed to ensure that changes to them aren’t dictated by the president alone, but through some degree of consensus that involves regulatory boards, Congress, and the courts. Ultimately, social programs should have mechanisms that allow for change so that laws don’t become outdated, as they do now. And while it’s impossible to guarantee that those changes will always be good, the current system of employer-sponsored health insurance is hardly a stable alternative.

    By contrast, shrinking government in the way that Republicans often talk about only makes people more vulnerable. Bigger governments — and more bureaucracy — can also insulate public institutions from the whims of an erratic president. For instance, Trump has tried to shutter the Consumer Financial Protection Bureau (CFPB), a regulatory agency that gets in the way of his and his allies’ business. This assault allows Trump to serve his own interests by pleasing his donors.

    In other words, Trump is currently trying to make government smaller — by shrinking or eliminating agencies that get in his way — to consolidate power. “Despite Donald Trump’s rhetoric about the size or inefficiency of government, what he has done is eradicate agencies that directly served people,” said Julie Margetta Morgan, president of the Century Foundation, who served as an associate director at the CFPB. “He may use the language of ‘government inefficiency’ to accomplish his goals, but I think what we’re seeing is that the goals are in fact to open up more lanes for big businesses to run roughshod over the American people.”

    The problem for small-government advocates is that the alternative to big government is not just small government. It’s also big business, because fewer services, rules, and regulations open up the door to privatization and monopolization. And while the government, however big, has to answer to the public, businesses are far less accountable. One example of how business can replace government programs is the Republicans’ effort to overhaul student loan programs in the latest reconciliation bill the House passed, which includes eliminating subsidized loans and limiting the amount of aid students receive. The idea is that if students can’t get enough federal loans to cover the cost of school, they’ll turn to private lenders instead. “It’s not only cutting Pell Grants and the affordability of student loan programs in order to fund tax cuts to the wealthy, but it’s also creating a gap where [private lenders] are all too happy to come in,” Margetta Morgan said. “This is the small government alternative: It’s cutting back on programs that provided direct services for people — that made their lives better and more affordable — and replacing it with companies that will use that gap as an opportunity for extraction and, in some cases, for predatory services.”

    Even with flawed oversight, a bigger and more powerful government is still preferable, because it can address people’s most basic needs, whereas small government and the privatization of public services often lead to worse outcomes. So while small government might sound like a nice alternative when would-be tyrants rise to power, the alternative to big government would only be more corrosive to democracy, consolidating power in the hands of even fewer people (and businesses). And ultimately, there’s one big way for Trump to succeed at destroying democracy, and that’s not by expanding government but by eliminating the parts of government that get in his way.
  • Europe threatens Apple with additional fines

    The European Commission has published its full Digital Markets Act (DMA) decision against Apple, and it’s far, far worse than anybody expected. The Commission, the executive arm of the European Union, has accepted absolutely none of Apple’s arguments against being fined, and the decision threatens yet more existential damage to the company.

    Apple isn’t winning the argument, and, right or wrong, the decision has fangs.

    Huge fines, big threats

    Europe announced in April that it would fine Apple an eye-popping €500 million for noncompliance with the DMA, giving Apple 60 days to comply with its decision. One month later, the Commission published the full ruling against Apple, which details that changes the company made to its App Store rules did not go far enough to bring it into compliance.

    The decision warns that Apple is subject to additional periodic fines in the future if it fails to comply with the Commission’s strict interpretation of the DMA, no matter how inherently punitive some of its demands may be. (Can anyone else spell “tariffs”?) We’ll know soon enough if there are to be wider consequences to Europe’s demands. Apple now has 30 days to fully comply with the DMA (in Europe’s opinion) or face additional fines.

    The act itself came into force in November 2022 and began to be implemented against companies defined as ‘gatekeepers’ in 2023. The intention is to stop Apple and others from using their market position to impose anticompetitive limitations on developers. 

    Who is steering?

    The big bugbear relates to Apple’s anti-steering restrictions, which prevent developers from telling customers they can purchase services outside the App Store. The DMA demands that Apple let developers offer this option, which Apple does, but Europe argues that the limitations the company makes on doing so are not in compliance with the law.

    Europe also says Apple’s existing restrictions, fees, and technical limitations undermine the effectiveness of the DMA. That seems to mean Apple cannot charge a commission and cannot warn users of the consequences they face when shopping outside the App Store. 

    The Commission even plays dumb to the potential significance of permitting developers to link out to any website from within their apps, rather than being constrained to approved (and secure) sites. It says Apple has provided insufficient justification for this restriction and also wants Apple to remove messages warning users when they are about to make a transaction outside the App Store.

    That’s going to be particularly pleasing to fraudsters, who may now attempt to create fake payment portals that look like reputable ones. Apple prevented $2 billion in fraud last year, the company has confirmed. Perhaps once the first big frauds take place, the EU may catch up to the online risks we all know exist.

    While I understand the original aim of Europe’s Digital Markets Act, the demands the Commission is making of Apple appear to go far beyond the original objective, which was to open up Apple’s platforms to competition. 

    The decisions now open Apple’s platform up to competitors. 

    There is a difference between the two, and, as described, it means Apple must now create and manage its platforms while permitting competitors to profit from those platforms at little or no cost.

    Apple rejects Europe

    Apple will fight in Europe. 

    “There is nothing in the 70-page decision released today that justifies the European Commission’s targeted actions against Apple, which threaten the privacy and security of our users in Europe and force us to give away our technology for free,” the company said. “Their decision and unprecedented fine came after the Commission continuously moved the goalposts on compliance, and repeatedly blocked Apple’s months-long efforts to implement a new solution. The decision is bad for innovation, bad for competition, bad for our products, and bad for users. While we appeal, we’ll continue engaging with the Commission to advocate on behalf of our European customers.”

    When the fine was initially revealed, the company also said: 

    “Today’s announcements are yet another example of the European Commission unfairly targeting Apple in a series of decisions that are bad for the privacy and security of our users, bad for products, and force us to give away our technology for free. We have spent hundreds of thousands of engineering hours and made dozens of changes to comply with this law, none of which our users have asked for. Despite countless meetings, the Commission continues to move the goal posts every step of the way.”

    My take? 

    Far from saving Europe’s tech industry, the manner in which the DMA is being applied will make the region even less relevant. Lacking a significant platform of its own, Europe’s approach will reduce choice and increase insecurity.

    As the clear first target of the DMA, Apple will inevitably be forced to increase prices, charge developers more for access to its developer tools, and will, I think, simply stop selling some products and services in Europe rather than threaten customer security. We know it can do this because it has done so before.

    Fundamentally, of course, the big question remains unaddressed: How much profit is it legitimate to make on any product or service? I imagine the European Commission doesn’t want to go near a question as fundamental to capitalist wealth extraction as that. Can you imagine the collapse in executive bonuses that would follow a decision to define what the maximum profit made in any business transaction should be?

    Lobbyists across the political spectrum would be appalled — that extra profit pays for their meals. Looking to the extent to which the current application of the DMA seems to favor Apple’s biggest competitors, I can’t help but imagine it’s been paying for a few European meals already. Nice work, if you can get it. 

    You can follow me on social media! Join me on BlueSky,  LinkedIn, Mastodon, and MeWe.
  • Microsoft and Google pursue differing AI agent approaches in M365 and Workspace

    Microsoft and Google are taking distinctive approaches with AI agents in their productivity suites, and enterprises need to account for the differences when formulating digital labor strategies, analysts said.

    In recent months, both companies have announced a dizzying array of new agents aimed at extracting value from corporate documents and maximizing efficiency. The tech giants have dropped numerous hints about where they’re headed with AI agents in their respective office suites, Microsoft 365 and Google Workspace.

    Microsoft is reshaping its Copilot assistant as a series of tools to create, tap into, and act on insights at individual and organizational levels. The Microsoft 365 roadmap lists hundreds of specialized AI tools under development to automate work for functions such as HR and accounting. The company is also developing smaller AI models to carry out specific functions.

    Google is going the opposite way, with its large language model Gemini at the heart of Workspace. Google offers tools that include Gems, for workers to create simple custom agents that automate tasks such as customer service, and Agentspace in Google Cloud, to build more complex custom agents for collaboration and workflow management. At the recent Google I/O developer conference, the company added real-time speech translation to Google Meet.

    “For both, the goal is to bring usable and practical productivity and efficiency capabilities to work tools,” said Liz Miller, vice president and principal analyst at Constellation Research.

    But the differing AI agent strategies are heavily rooted in each company’s philosophical approaches to productivity. Although Microsoft has long encouraged customers to move from its traditional “perpetual-license” Office suite to the Microsoft 365 subscription-based model, M365 notably retains the familiar desktop apps. Google Workspace, on the other hand, has always been cloud-based.

    Microsoft users are typically a bit more tethered to traditional enterprise work styles, while Google has always been the “cloud-first darling for smaller organizations that still crave real-time collaboration,” Miller said.

    When it comes to the generative AI models being integrated into the two office suites, “Google’s Gemini models are beating out the models being deployed by Microsoft,” Miller said. “But as Microsoft expands its model ‘inventory’ in use across M365, this could change.”

    Microsoft has an advantage, as many desktop users live in Outlook or Word. The intelligence Copilot can bring from CRM software is readily available, while that integration is more complex in the cloud-native Google Workspace.

    “Microsoft still has an edge in a foundational understanding of work and the capacity to extend Copilot connections across applications as expansive as the Office suite through to Dynamics, giving AI a greater opportunity to be present in the spaces and presentation layers where workers enjoy working,” Miller said.

    Microsoft’s Copilot Agents and Google’s Gems and Agentspace are in their early stages, but there have been positive developments, said J.P. Gownder, a vice president and principal analyst on Forrester’s Future of Work team.

    Microsoft recently adopted Google’s A2A protocol, which makes it easier for users of both productivity suites to collaborate and unlock value from stagnant data sitting on other platforms. “That should be a win for interoperability,” Gownder said.

    But most companies that are Microsoft shops have years or decades of digital assets that hold them back from considering Google, he said. For example, Excel macros, pivot tables, and customizations cannot be easily or automatically migrated to Google Sheets, he said.

    “As early as this market is, I don’t think it’s fair to rank either player — Microsoft or Google — as being the leader; both of them are constructing new ecosystems to support the growth of agentic AI,” Gownder said.

    Most Microsoft Office users have moved to M365, but AI is helping Google make inroads into larger organizations, especially among enterprises that are newer and less oriented toward legacy Microsoft products, said Jack Gold, principal analyst at J. Gold Associates.

    Technologies like A2A blur the line between on-premises and cloud productivity. As a result, “Google Workspace is no longer perceived as inferior, as it had been in the past,” Gold said.

    And for budget-constrained enterprises, the value of AI agent features is not the only consideration. “There is also the cost equation at work here, as Google seems to have a much more transparent cost structure than Microsoft with all of its user classes and discounts,” Gold said.

    Microsoft does not include Copilot in its M365 subscriptions, which vary in price depending on the type of customer. The Copilot business subscriptions range from $30 per user per month for M365 Copilot to $200 per month for 25,000 messages for Copilot Studio, which is also available under a pay-as-you-go model. Google has flat subscription pricing for Workspace, starting at $14 per user per month for business plans with Gemini included.
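    To put those list prices in rough context, here is a back-of-the-envelope Python sketch (my own illustration, not something from the analysts quoted): the seat count is an assumption, the Microsoft figure covers only the Copilot add-on and excludes the underlying M365 licence and any Copilot Studio message packs, and real-world discounts and enterprise agreements are ignored.

        # Back-of-the-envelope comparison of the list prices quoted above.
        # Assumptions (not from the article): 500 seats, 12 billing months,
        # no discounts; the M365 figure is the Copilot add-on alone.
        SEATS = 500
        MONTHS = 12

        copilot_addon_per_user = 30   # USD per user per month, M365 Copilot add-on
        workspace_per_user = 14       # USD per user per month, Workspace with Gemini

        copilot_annual = SEATS * MONTHS * copilot_addon_per_user
        workspace_annual = SEATS * MONTHS * workspace_per_user

        print(f"M365 Copilot add-on:  ${copilot_annual:,} per year for {SEATS} users")
        print(f"Workspace w/ Gemini:  ${workspace_annual:,} per year for {SEATS} users")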
  • Remembering the controversial iOS 7 introduction

    With just days to go before WWDC, the consensus is that Apple will unveil a big, visionOS-inspired redesign across its operating systems. And while some might be dreading a repeat of the iOS 7 announcement from a decade ago, it’s been long enough that many readers might not remember (or may have never even seen) what that overhaul actually looked like.
    So here’s a quick refresher on what happened, and why this year will likely (I mean, hopefully?) be different.

    The years between 2011 and 2013 were pretty busy at Apple. Following Steve Jobs’ passing, Apple fired Scott Forstall (then SVP of iOS Software) over the botched release of Apple Maps. That left a gap in software design leadership, which was filled by Jony Ive, who also led hardware design.
    Soon after, rumors began swirling that he was planning a major visual overhaul of the entire system.
    Flat
    In the run-up to WWDC 2013, the Wall Street Journal reported that Ive had been working on “a more ‘flat design’ that is starker and simpler,” a sharp departure from the great skeuomorphic visuals of the time (think linen textures, paper-like folders, glass effects, and yes, Corinthian leather).
    Some time after that, 9to5Mac exclusively shared mockups of the redesign, which had been leaked to Mark Gurman.

    It was chaos.
    I vividly remember thinking it was reckless to publish such unfairly primitive sketches of what would certainly be a more polished overhaul. After weeks of intense debate and fierce expectations that the rumors had been wrong, Apple introduced iOS 7:
    In the years that followed, Apple scaled back its over-flattening of the system, evolving toward what we have today. Now, that’s about to change once again.
    Why iOS 26 probably won’t be like iOS 7
    Currently, most reports tend to agree that the redesign will be deeply influenced by the visual language of visionOS, with its translucent layers, depth effects, and soft glassy textures. And even if you’re like me and you’ve never worn an Apple Vision Pro, chances are you’ve seen what visionOS looks like. Apple has already laid the groundwork, so the change won’t be such a jarring surprise, like with iOS 7.

    And from a design perspective, speaking as someone who’s worked in graphic design for over two decades, the best move Apple could make is exactly what’s been reported: updating all systems at once.
    If you’ve ever had to adapt interfaces and key visuals to multiple concepts, such as wide, narrow, square, rectangular, big, small, etc., you know that with every new aspect ratio, you become a little more familiar and more comfortable with each individual element.
    By starting out with the virtually boundless, unconstrained environment of visionOS, then increasingly moving to smaller interfaces across macOS, iPadOS, iOS, and watchOS, every decision informs past and future visual adaptations. In other words, a redesign this broad can be iterative in both directions.
    Will it be beautiful? That’s subjective. Even iOS 7 had a handful of defenders. But one thing is certain: Apple’s design team knows how much this moment matters.
    This is the biggest task they’ve been given since Ive left the company, and they are well aware of the contentious history of iOS design updates. The mere fact that the new design hasn’t leaked yet points to the absence of dissidents inside the team, and considering how close we are to the announcement, that’s already a victory in itself.

  • How cyber security professionals are leveraging AWS tools

    With millions of businesses now using Amazon Web Services (AWS) for their cloud computing needs, it’s become a vital consideration for IT security teams and professionals. As such, AWS offers a broad range of cyber security tools to secure AWS-based tech stacks. They cover areas such as data privacy, access management, configuration management, threat detection, network security, vulnerability management, regulatory compliance and so much more.
    Along with being broad in scope, AWS security tools are also highly scalable and flexible. Therefore, they’re ideal for high-growth organisations facing a fast-expanding and increasingly sophisticated cyber threat landscape.
    On the downside, they can be complex to use, don’t always integrate well with multi-cloud environments, and become outdated and expensive quickly. These challenges underscore the importance of continual learning and effective cost management in the cyber security suite.
    One of the best things AWS offers cyber security professionals is a centralised view of all their different virtual environments, including patch management, vulnerability scanning and incident response, to achieve “smoother operations”, according to Richard LaTulip, field chief information security officer at cyber threat intelligence platform Recorded Future.
    Specifically, he says tools like AWS CloudTrail and AWS Config allow cyber security teams to accelerate access management, anomaly detection and real-time policy compliance, and that risk orchestration is also possible thanks to AWS’s support for specialist platforms such as Recorded Future. 
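    For illustration only, here is a minimal boto3 sketch of the kind of CloudTrail query a team might run when hunting anomalies of that sort, pulling recent console sign-in events. The region, lookback window and any downstream filtering are assumptions rather than details from the article, and the caller is assumed to hold the usual cloudtrail:LookupEvents permission.

        # Minimal sketch: pull the last 24 hours of ConsoleLogin events from CloudTrail.
        # Region, lookback window and filtering logic are illustrative assumptions.
        from datetime import datetime, timedelta, timezone
        import boto3

        cloudtrail = boto3.client("cloudtrail", region_name="eu-west-1")
        end = datetime.now(timezone.utc)
        start = end - timedelta(hours=24)

        events = cloudtrail.lookup_events(
            LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
            StartTime=start,
            EndTime=end,
            MaxResults=50,
        )["Events"]

        for event in events:
            # Each record carries the caller identity and timestamp; these can be fed
            # into whatever anomaly-detection or alerting pipeline the team already runs.
            print(event["EventTime"], event.get("Username", "unknown"))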
    This sentiment is echoed by Crystal Morin, cyber security strategist at container security firm Sysdig, who describes AWS CloudTrail and AWS GuardDuty as “the bedrock” for organisations with a multi- or hybrid cloud environment. 
    She says these tools offer “great insight” into cloud environment activity that can be used to identify issues affecting corporate systems, better understand them and ultimately determine their location for prompt removal. 

    Having made tons of cloud security deployments for Fortune 200 companies in his previous role as global AWS security lead at consulting giant Accenture, Shaan Mulchandani, founder and CEO of cloud security firm HTCD, knows a thing or two about AWS’s cyber security advantages. 
    Mulchandani says AWS implementations helped these companies secure their baseline configurations, streamline C-suite IT approvals to speed up AWS migration, eliminate manual post-migration security steps and seamlessly scale environments containing thousands of workloads. “I continue to help executives at organisations architect, deploy and maximise outcomes using AWS-native tools,” he adds.
    As a senior threat researcher at cyber intelligence platform EclecticIQ, Arda Büyükkaya uses AWS tools to scale threat behaviour analysis, develop secure malware analysis environments, and automate threat intelligence data collection and processing. 
    Calling AWS an “invaluable” threat analysis resource, he says the platform has made it a lot easier to roll out isolated research environments. “AWS’s scalability enables us to process large volumes of threat data efficiently, whilst their security services help maintain the integrity of our research infrastructure,” Büyükkaya tells Computer Weekly.
    At log management and security analytics software company Graylog, AWS usage happens across myriad teams. One of these is led by EMEA and UK lead Ross Brewer. His department is securing and protecting customer instances using tools like AWS GuardDuty, AWS Security Hub, AWS Config, AWS CloudTrail, AWS Web Application Firewall (WAF), AWS Inspector and AWS Identity and Access Management (IAM).
    Its IT and application security department also relies on security logs provided by AWS GuardDuty and AWS CloudTrail to spot anomalies affecting customer instances. Brewer says the log tracking and monitoring abilities of these tools have been invaluable for security, compliance and risk management. “We haven’t had any issues with our desired implementations,” he adds.

    Cyber law attorney and entrepreneur Andrew Rossow is another firm believer in AWS as a cyber security tool. He thinks its strongest aspect is the centralised security management it offers for monitoring threats, responding to incidents and ensuring regulatory compliance, and describes the usage of this unified, data-rich dashboard as the “difference between proactive defence and costly damage control” for small businesses with limited resources. 
    But Rossow believes this platform’s secret sauce is its underlying artificial intelligence (AI) and machine learning models, which power background threat tracking, and automatically alert users to security issues, data leaks and suspicious activity. These abilities, he says, allow cyber security professionals to “stay ahead of potential crises”.
    Another area where Rossow thinks AWS excels is its integration with regulatory frameworks such as the California Consumer Privacy Act, the General Data Protection Regulation and the Payment Card Industry Data Security Standard. He explains that AWS Config and AWS Security Hub offer configuration and resource auditing to ensure business activities and best practices meet such industry standards. “This not only protects our clients, but also shields us from the legal and reputational fallout of non-compliance,” adds Rossow.
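    As a rough sketch of what that auditing can look like in practice (illustrative only: the filter values and region are assumptions, and permission to call Security Hub is taken as given), a few lines of boto3 can list resources currently failing their compliance checks against whichever standards are enabled.

        # Illustrative sketch: list active Security Hub findings whose compliance
        # status is FAILED, i.e. resources breaching an enabled standard's controls.
        import boto3

        securityhub = boto3.client("securityhub", region_name="eu-west-1")

        findings = securityhub.get_findings(
            Filters={
                "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
                "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
            },
            MaxResults=50,
        )["Findings"]

        for finding in findings:
            resource_ids = [r["Id"] for r in finding.get("Resources", [])]
            print(finding["Severity"]["Label"], finding["Title"], resource_ids)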
    AWS tools provide cyber security teams with “measurable value”, argues Shivraj Borade, senior analyst at management consulting firm Everest Group. He says GuardDuty is powerful for real-time monitoring, AWS Config for security posture management and IAM Access Analyzer for privilege sprawl prevention. “What makes these tools powerful is their interoperability, enabling a scalable and cohesive security architecture,” says Borade.

    Although AWS is a valuable tool for cyber security professionals, Borade emphasises that it’s “not without limitations”. He says the platform’s lack of depth and flexibility means it isn’t always suitable for modelling complex cyber security threats or handling specific compliance issues. Rather, cyber security professionals should use AWS as a foundational element of their wider tech stack. 
    Using the AWS Security Hub as an example, Borade says it can effectively serve the purpose of an “aggregation layer”. But he warns that incorrect configurations often result in alert fatigue, meaning people can become oblivious to notifications when repeatedly spammed with them. 
    Borade also warns of misconfigurations arising from teams’ lack of understanding of how cloud technology works. Consequently, he urges cyber security teams to “embed cloud-native security into the DevSecOps lifecycle” and “invest in continuous cross-functional training”.
    For Morin, the biggest challenge of using AWS as a security tool is that it’s constrained by best practice gaps around areas like workload protection, vulnerability management, identity management and threat detection. She says one classic example is the difficulty cyber security teams face when monitoring access permissions granted over time, leaving organisations with large IT environments dangerously exposed. 
    Using multiple AWS security tools also increases the attack surface for cyber criminals to exploit. Morin warns that hackers may look for “visibility gaps” by sifting through different AWS planes, helping them “mask their activities” and “effectively bypass detection”. To stay one step ahead of cyber crooks, she advises organisations to invest in runtime solutions alongside AWS-native tools. These will provide real-time security insights.
    Technical and cost issues may also impact AWS implementations in cyber security departments, warns Mulchandani. For instance, Amazon Macie may be able to create inventories for all object versions across different buckets, but Mulchandani says this creates a “mountain of medium-severity findings” to decipher.
    “Without strict scoping, licence costs and analyst time balloon,” he adds. “Costs can also increase when an organisation requires a new AWS launch that isn’t available in their region and they subsequently invest in a temporary solution from a different vendor.”

    For those new to using AWS security tools, Morin says an important first step is to understand the cloud security shared responsibility model. She explains that the user is responsible for securing their deployments, correctly configuring them and closing any security visibility gaps. AWS, on the other hand, must ensure the underlying infrastructure provided is safe to use. 
    As part of the users’ role in this model, she says they should enable logging and alerts for AWS tools and services used in their organisation. What’s also key is detailing standard organisational operating behaviour in a security baseline. This, she claims, will let organisations tell suspicious user actions apart from normal ones.
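    A minimal sketch of that first step is shown below, assuming boto3, appropriate IAM permissions and a pre-existing S3 bucket whose policy already allows CloudTrail writes; the trail, bucket and region names are placeholders of mine, not recommendations from the article.

        # Minimal sketch: turn on two of the logging/alerting foundations discussed
        # above -- a multi-region CloudTrail trail and a GuardDuty detector.
        import boto3

        region = "eu-west-1"                      # assumed region
        cloudtrail = boto3.client("cloudtrail", region_name=region)
        guardduty = boto3.client("guardduty", region_name=region)

        # The S3 bucket must already exist with a policy permitting CloudTrail writes.
        cloudtrail.create_trail(
            Name="org-audit-trail",               # placeholder name
            S3BucketName="example-cloudtrail-logs",
            IsMultiRegionTrail=True,
        )
        cloudtrail.start_logging(Name="org-audit-trail")

        # Enable GuardDuty threat detection in this region.
        detector_id = guardduty.create_detector(Enable=True)["DetectorId"]
        print("GuardDuty detector enabled:", detector_id)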
    Many tried-and-tested best practices can be found in professional benchmarks such as the AWS Well-Architected framework and the Center of Internet Security’s Benchmark for AWS. “Make use of the work of those who have been fighting the good fight,” says Morin.
    Finally, she urges anyone working in cloud security to remember that real-time operations are essential. Runtime security can help by protecting all running applications and data from the latest cyber security threats, many of which are preventable through automated processes. 
    Starting small is a good idea, too. Mulchandani recommends that AWS newbies begin with AWS tooling, and if any gaps persist, they can then look for third-party offerings. “Do not try to procure and integrate 20-plus external tools upfront as this will cause numerous architectural, security and cost challenges,” he says.
    With the rapid pace of innovation across the AWS ecosystem, Borade urges anyone using this platform to stay up-to-date with the latest releases by participating in certification programmes, attending re:Inforce sessions and tracking the latest release notes from AWS. In the future, he expects automation, AI-fuelled insights, “tighter” third-party integrations, and identity orchestration and policy-as-code frameworks to dominate the AWS cyber security ecosystem. 
    On the whole, understanding the AWS platform and its role in cloud security is a vital skill for cyber security professionals. And AWS certainly offers some great tools for managing the biggest risks impacting its popular cloud platform. But cyber security professionals looking to leverage AWS in their day-to-day roles must be willing to get to grips with some complex tools, keep up-to-date with the latest releases in the vast AWS ecosystem and ensure their department budget can accommodate spiralling AWS costs.

    Read more about AWS

    An AWS tech stack can aid business growth and facilitate efficient operations, but misconfigurations have become all too common and stall this progress.
    The AWS Summit in London saw the public cloud giant appoint itself to take on the task of skilling up hundreds of thousands of UK people in using AI technologies.
    Amazon Web Services debuts new Outposts racks and servers that extend its infrastructure to the edge to support network intensive workloads and cloud radio access applications.