• Trump Delays TikTok Ban for the Third Time: A Hopeful Perspective

    In a move that has left many users relieved and hopeful, former President Donald Trump has announced the third delay of the TikTok ban. This decision resonates positively with millions of devoted TikTok users across the United States and beyond, who have made this platform an integral part of their daily lives. As we navigate this ongoing saga, let’s take a moment to appreciate the ...
  • Four science-based rules that will make your conversations flow

    One of the four pillars of good conversation is levity. You needn’t be a comedian, but you can have some fun. Tetra Images, LLC/Alamy
    Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats.
    David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance?
    Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, and [my students] all get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail.

    Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from?
    I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently.


    The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too.
    “TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. How should we decide what to talk about?
    My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different.
    Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be tricky. Martin Parr/Magnum Photos
    What’s your advice when making these decisions?
    There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else.
    After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things. This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it.
    The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is?
    Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already.

    What kinds of questions should we be asking – and avoiding?
    In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful.
    There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomans [at Imperial College London] and I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere. Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it.
    Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flow. Janelle Bruno
    What are the benefits of levity?
    When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again.
    Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room.
    Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian?
    Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship. These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud.

    This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like?
    Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective. Being a kinder conversationalist is about trying to focus on your partner’s perspective and then figuring out what they need and helping them to get it.
    Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone?
    Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles, and a little grace can go very, very far.
  • The Best Hidden-Gem Etsy Shops for Fans of Farmhouse Style

    Becky Luigart-Stayner for Country Living
    Country Living editors select each product featured. If you buy from a link, we may earn a commission.
    Like a well-made quilt, a classic farmhouse aesthetic comes together gradually—a little bit of this, a touch of that. Each addition is purposeful and personal—and isn’t that what home is all about, really? If this type of slowed-down style speaks to you, you're probably already well aware that Etsy is a treasure trove of finds both new and old to fit your timeless farmhouse aesthetic. But with more than eight million active sellers on its marketplace, sometimes the possibilities—vintage feed sacks! primitive pie safes! galvanized grain scoops!—can quickly go from enticing to overwhelming. To better guide your search for the finest farmhouse furnishings, we’ve gathered a go-to list of editor- and designer-beloved Etsy shops that, time and again, turn out hardworking, homespun pieces of heirloom quality. From beautiful antique bureaus to hand-block-printed table linens, the character-rich wares from these sellers will help you design the farmhouse of your dreams, piece by precious piece.
    For Antique Americana: Acorn and Alice
    Every good old-fashioned farmhouse could use some traditional Americana to set the tone, and this Pennsylvania salvage shop offers rustic touches loaded with authentic antique allure. Aged wooden wares abound (think vintage milk crates, orchard fruit baskets, and berry boxes), as well as a grab bag of cotton and burlap feed sacks, perfect for framing as sets or crafting into footstool covers or throw pillows.
    For French Country Textiles: Forest and Linen
    There’s nothing quite like breezy natural fabrics to make you want to throw open all the windows and let that country air in while the pie cools. Unfussy and lightweight, the hand-crafted curtains, bedding, and table linens from these Lithuanian textile experts have a classic understated quality that would be right at home in the coziest guest room or most bustling kitchen. Warm, welcoming hues range from marigold yellow to cornflower blue, but soft gingham checkers and timeless French ticking feel especially farm-fresh. Our current favorite? These cherry-striped country cafe curtains.
    Vintage red torchons feel right at home in a farmhouse kitchen. Becky Luigart-Stayner for Country Living
    For Rustic Rugs: Old New House
    Whether or not you’re lucky enough to have gorgeous wide-plank floors, an antique area rug or runner can work wonders for giving a room instant character and warmth. This fifth-generation family-run retailer specializes in importing heirloom hand-knotted carpets dating back to the 1800s, with a focus on traditional designs from the masters in Turkey, India, Persia, and more. Their vast variety of sizes and styles offers something for every aesthetic, with one-of-a-kind patterns ranging from distressed neutrals to chain-stitched florals to ornate arabesques.
    For Pillows and Provisions: Habitation Boheme
    In true farmhouse fashion, this Indiana shop has curated an enticing blend of handcrafted and vintage homewares that work effortlessly well together. A line of cozy hand-stitched linen pillow covers (patterned with everything from block-printed blossoms to provincial pinstripes) sits prettily alongside a mix of found objects, from patinated brass candlesticks and etched cloisonné vases to sturdy stoneware crockery and woven wicker baskets.
    For Elegant Everyday Dishware: Convivial Production
    Simple, yet undeniably stunning, the handcrafted dinnerware from this Missouri-based ceramist is designed with durability in mind. Produced in a single, time-tested shade of ivory white glaze, these practical stoneware cups, bowls, and plates make the perfect place settings for lively farm-to-table feasts with friends and family. Beautifully balancing softness and heft, each dish is meant to feel comfortable when being held and passed, but also to look attractive when stacked upon open shelving.
    For English Country Antiques: 1100 West Co.
    This Illinois antiques shop is stocked with all manner of versatile vintage vessels culled from the English countryside, from massive stoneware crocks to charming little escargot pots. Their collection of neutral containers can be adapted for nearly any provincial purpose, but we especially love their assortment of old advertising—from toothpaste pots to marmalade jars and ginger beer bottles galore—for a nice little nod to the quintessential country practice of repurposing what you’ve got.
    Pretty English ironstone will always have our heart. Brian Woodcock/Country Living
    For a Cozy Glow: Olde Brick Lighting
    Constructed by hand from cord to shade, the vintage-inspired lighting produced by this Pennsylvania retailer is a tribute to the iconic quality and character of old American fixtures. Nostalgic design elements include hand-blown glass and finishes ranging from matte black to brushed nickel and antique brass. To create an authentic farmhouse ambiance, check out their gooseneck sconces, enameled red and blue barn lights, and milky white striped schoolhouse flush mounts.
    For Enduring Artifacts: Through the Porthole
    The weathered, artisan-made wares curated by this California husband-and-wife duo have been hand-selected from around the globe for their time-etched character. From gorgeous gray-black terracotta vases and rust-colored Turkish clay pots to patinated brass cow bells and rustic reclaimed elm stools, each item is a testament to the lasting beauty of classic materials, with storied sun-bleaching and scratches befitting the most beloved, lived-in rooms.
    For Winsome Wall Art: Eugenia Ciotola Art
    Through graceful brushstrokes and textural swirls of paint, Maryland-based artist Eugenia Ciotola has captured the natural joy of a life that’s simple and sweet. Her pieces celebrate quiet scenes of bucolic beauty, from billowing bouquets of peonies to stoic red barns sitting in fields of wavy green. For a parlor gallery or gathering space, we gravitate toward her original oils on canvas—an impasto still life, perhaps, or a plainly frocked maiden carrying a bountiful bowl of lemons—while her stately farm animal portraits would look lovely in a child’s nursery.
    For Time-Tested Storage Solutions: Materials Division
    Function is forefront for this farmhouse supplier operating out of New York, whose specialized selection of vintage provisions has lived out dutiful lives of purpose. Standouts include a curated offering of trusty antique tool boxes and sturdy steel-clad trunks whose rugged patina tells the story of many a household project. Meanwhile, a hardworking mix of industrial wire and woven wood gathering baskets sits handsomely alongside heavy-duty galvanized garbage bins and antique fireplace andirons.
    For Pastoral Primitives: Comfort Work Room
    Full of history and heritage, the old, hand-fabricated furnishings and primitive wooden tools in this unique Ukrainian antique shop are rural remnants of simpler times gone by. Quaint kitchen staples like chippy chiseled spoons, scoops, and cutting boards make an accessible entry point for the casual collector, while scuffed-up dough troughs, butter churns, washboards, and barrels are highly desirable conversation pieces for any antique enthusiast who’s dedicated to authentic detail.
    Antique washboards make for on-theme wall art in a laundry room. Becky Luigart-Stayner for Country Living
    For Heirloom-Quality Coverlets: Bluegrass Quilts
    No layered farmhouse look would be complete without the homey, tactile touch of a hand-pieced quilt or two draped intentionally about the room. From harvest-hued sawtooth stars to playful patchwork pinwheels, each exquisite blanket from this Kentucky-based artisan is slow-crafted in traditional fashion from 100% cotton materials, and can even be custom stitched from scratch to match your personal color palette and decorative purpose. For a classic country aesthetic, try a log cabin, double diamond, or star patch pattern.
    For Hand-Crafted Gifts: Selsela
    Featuring a busy barnyard’s worth of plucky chickens, cuddly sheep, and happy little Holstein cows, this Illinois woodworker’s whimsical line of farm figurines and other giftable goodies is chock-full of hand-carved charm. Crafted from 100% recycled birch and painted in loving detail, each creature has a deliberately rough-hewn look and feel worthy of any cozy and collected home.
    For Open-Concept Cabinetry: Folkhaus
    A hallmark of many modern farmhouses, open-concept shelving has become a stylish way to show that the practical wares you use every day are the same ones you’re proud to put on display. With their signature line of bracketed wall shelves, Shaker-style peg shelves, and raw steel kitchen rails, the team at Folkhaus has created a range of open storage solutions that beautifully balances elevated design and rustic utility. Rounding out their collection is a selection of open-shelved accent pieces like bookcases, benches, and console tables—each crafted from character-rich kiln-dried timber and finished in your choice of stain.
    For Antique Farmhouse Furniture: Cottage Treasures LV
    The foundation of a well-furnished farmhouse often begins with a single prized piece. Whether it’s a slant-front desk, a primitive jelly cabinet, or a punched-tin pie safe, this established New York-based dealer has a knack for sourcing vintage treasures with the personality and presence to anchor an entire space. Distressed cupboards and cabinets may be their bread and butter, but you’ll also find a robust roundup of weathered farm tables, Windsor chairs, and blanket chests—and currently, even a rare 1500s English bench.
    For Lively Table Linens: Moontea Studio
    As any devotee of slow decorating knows, sometimes it’s the little details that really bring a look home. For a spot of cheer along with your afternoon tea, we love the hand-stamped table linens from this Washington-based printmaker, which put a peppy, modern spin on farm-fresh produce. Patterned with lush illustrations of bright red tomatoes, crisp green apples, and golden sunflowers—then neatly finished with a color-coordinated hand-stitched trim—each tea towel, placemat, and napkin pays homage to the hours we spend doting over our gardens.
    For Traditional Transferware: Prior Time
    There’s lots to love about this Massachusetts antiques shop, which admittedly skews slightly cottagecore, but the standout, for us, is the seller’s superior selection of dinner and serving ware. In addition to a lovely lot of mottled white ironstone platters and pitchers, you’ll find a curated mix of Ridgeway and Wedgwood transferware dishes in not only classic cobalt blue, but beautiful browns, greens, and purples, too.
    Pretty brown transferware could be yours with one quick "add to cart." Becky Luigart-Stayner for Country Living
    For Folk Art for Your Floors: KinFolk Artwork
    Designed by a West Virginia watercolor and oils artist with a penchant for painting the past, these silky chenille floor mats feature an original cast of colonial characters and folksy scenes modeled after heirloom textiles from the 18th and 19th centuries. Expect lots of early American and patriotic motifs, including old-fashioned flags, Pennsylvania Dutch fraktur, equestrian vignettes, and colonial house samplers—each made to mimic a vintage hooked rug for that cozy, homespun feeling.
    For Historical Reproductions: Schooner Bay Co.
    Even in the most painstakingly appointed interior, buying antique originals isn’t always an option. And that’s where this trusted Pennsylvania-based retailer for historical reproductions comes in. Offering a colossal collection of framed art prints, decorative trays, and brass objects, these connoisseurs of the classics have decor for every old-timey aesthetic, whether it’s fox hunt prints for your cabin, Dutch landscapes for your cottage, or primitive animal portraits for your farmstead.
    For General Store Staples: Farmhouse Eclectics
    Hand-plucked from New England antique shops, estate sales, and auctions, the salvaged sundries from this Massachusetts-based supplier are the type you might spy in an old country store—wooden crates emblazoned with the names of local dairies, antique apple baskets, seed displays, signs, and scales. Whether you’re setting up your farmstand or styling your entryway, you’ll have plenty of storage options and authentic accents to pick from here.
    So many food scales, so little time. Becky Luigart-Stayner for Country Living
    Jackie Buddie is a freelance writer with more than a decade of editorial experience covering lifestyle topics including home decor how-tos, fashion trend deep dives, seasonal gift guides, and in-depth profiles of artists and creatives around the globe. She holds a degree in journalism from the University of North Carolina at Chapel Hill and received her M.F.A. in creative writing from Boston University. Jackie is, among other things, a collector of curiosities, Catskills land caretaker, dabbling DIYer, day hiker, and mom. She lives in the hills of Bovina, New York, with her family and her sweet-as-pie rescue dog.
For Elegant Everyday DishwareConvivial ProductionSimple, yet undeniably stunning, the handcrafted dinnerware from this Missouri-based ceramist is designed with durability in mind. Produced in a single, time-tested shade of ivory white glaze, these practical stoneware cups, bowls, and plates make the perfect place settings for lively farm-to-table feasts with friends and family. Beautifully balancing softness and heft, each dish is meant to feel comfortable when being held and passed, but also to look attractive when stacked upon open shelving. For English Country Antiques1100 West Co.This Illinois antiques shop is stocked with all manner of versatile vintage vessels culled from the English countryside, from massive stoneware crocks to charming little escargot pots. Their collection of neutral containers can be adapted for nearly any provincial purpose (envision white ironstone pitchers piled high with fresh-picked hyacinths, or glass canning jars holding your harvest grains), but we especially love their assortment of old advertising—from toothpaste pots to marmalade jars and ginger beer bottles galore—for a nice little nod to the quintessential country practice of repurposing what you’ve got. Brian Woodcock/Country LivingPretty English ironstone will always have our heart.For a Cozy GlowOlde Brick LightingConstructed by hand from cord to shade, the vintage-inspired lighting produced by this Pennsylvania retailer is a tribute to the iconic quality and character of old American fixtures. Nostalgic design elements include hand-blown glass (crafted using cast-iron molds from over 80 years ago) and finishes ranging from matte black to brushed nickel and antique brass. To create an authentic farmhouse ambiance, check out their gooseneck sconces, enameled red and blue barn lights, and milky white striped schoolhouse flush mounts. For Enduring ArtifactsThrough the PortholeThe weathered, artisan-made wares curated by this California husband-and-wife duo have been hand-selected from around the globe for their time-etched character. From gorgeous gray-black terracotta vases and rust-colored Turkish clay pots to patinated brass cow bells and rustic reclaimed elm stools, each item is a testament to the lasting beauty of classic materials, with storied sun-bleaching and scratches befitting the most beloved, lived-in rooms. For Winsome Wall ArtEugenia Ciotola ArtThrough graceful brushstrokes and textural swirls of paint, Maryland-based artist Eugenia Ciotola has captured the natural joy of a life that’s simple and sweet. Her pieces celebrate quiet scenes of bucolic beauty, from billowing bouquets of peonies to stoic red barns sitting in fields of wavy green. For a parlor gallery or gathering space, we gravitate toward her original oils on canvas—an impasto still life, perhaps, or a plainly frocked maiden carrying a bountiful bowl of lemons—while her stately farm animal portraits (regal roosters! ruff collared geese!) would look lovely in a child’s nursery.For Time-Tested Storage SolutionsMaterials DivisionFunction is forefront for this farmhouse supplier operating out of New York, whose specialized selection of vintage provisions have lived out dutiful lives of purpose. Standouts include a curated offering of trusty antique tool boxes and sturdy steel-clad trunks whose rugged patina tells the story of many-a household project. 
Meanwhile, a hardworking mix of industrial wire and woven wood gathering baskets sits handsomely alongside heavy-duty galvanized garbage bins and antique fireplace andirons.For Pastoral PrimitivesComfort Work RoomFull of history and heritage, the old, hand-fabricated furnishings and primitive wooden tools in this unique Ukrainian antique shop are rural remnants of simpler times gone by. Quaint kitchen staples like chippy chiseled spoons, scoops, and cutting boards make an accessible entry point for the casual collector, while scuffed up dough troughs, butter churns, washboards, and barrels are highly desirable conversation pieces for any antique enthusiast who’s dedicated to authentic detail. Becky Luigart-Stayner for Country LivingAntique washboards make for on-theme wall art in a laundry roomFor Heirloom-Quality CoverletsBluegrass QuiltsNo layered farmhouse look would be complete without the homey, tactile touch of a hand-pieced quilt or two draped intentionally about the room. From harvest-hued sawtooth stars to playful patchwork pinwheels, each exquisite blanket from this Kentucky-based artisan is slow-crafted in traditional fashion from 100% cotton materials, and can even be custom stitched from scratch to match your personal color palette and decorative purpose. For a classic country aesthetic, try a log cabin, double diamond, or star patch pattern. For Hand-Crafted GiftsSelselaFeaturing a busy barnyard’s worth of plucky chickens, cuddly sheep, and happy little Holstein cows, this Illinois woodworker’s whimsical line of farm figurines and other giftable goodies (think animal wine stoppers, keychains, fridge magnets, and cake toppers) is chock-full of hand-carved charm. Crafted from 100% recycled birch and painted in loving detail, each creature has a deliberately rough-hewn look and feel worthy of any cozy and collected home. For Open-Concept CabinetryFolkhausA hallmark of many modern farmhouses, open-concept shelving has become a stylish way to show that the practical wares you use everyday are the same ones you’re proud to put on display. With their signature line of bracketed wall shelves, Shaker-style peg shelves, and raw steel kitchen rails, the team at Folkhaus has created a range of open storage solutions that beautifully balances elevated design and rustic utility. Rounding out their collection is a selection of open-shelved accent pieces like bookcases, benches, and console tables—each crafted from character-rich kiln-dried timber and finished in your choice of stain.Related StoryFor Antique Farmhouse FurnitureCottage Treasures LVThe foundation of a well-furnished farmhouse often begins with a single prized piece. Whether it’s a slant-front desk, a primitive jelly cabinet, or a punched-tin pie safe, this established New York-based dealer has a knack for sourcing vintage treasures with the personality and presence to anchor an entire space. Distressed cupboards and cabinets may be their bread and butter (just look at this two-piece pine hutch!) but you’ll also find a robust roundup of weathered farm tables, Windsor chairs, and blanket chests—and currently, even a rare 1500s English bench. For Lively Table LinensMoontea StudioAs any devotee of slow decorating knows, sometimes it’s the little details that really bring a look home. For a spot of cheer along with your afternoon tea, we love the hand-stamped table linens from this Washington-based printmaker, which put a peppy, modern spin on farm-fresh produce. 
Patterned with lush illustrations of bright red tomatoes, crisp green apples, and golden sunflowers—then neatly finished with a color-coordinated hand-stitched trim—each tea towel, placemat, and napkin pays homage to the hours we spend doting over our gardens. For Traditional TransferwarePrior TimeThere’s lots to love about this Massachusetts antiques shop, which admittedly skews slightly cottagecore (the pink Baccarat perfume bottles! the hobnail milk glass vases! the huge primitive bread boards!) but the standout, for us, is the seller’s superior selection of dinner and serving ware. In addition to a lovely lot of mottled white ironstone platters and pitchers, you’ll find a curated mix of Ridgeway and Wedgwood transferware dishes in not only classic cobalt blue, but beautiful browns, greens, and purples, too.Becky Luigart-Stayner for Country LivingPretty brown transferware could be yours with one quick "add to cart."For Folk Art for Your FloorsKinFolk ArtworkDesigned by a West Virginia watercolor and oils artist with a penchant for painting the past, these silky chenille floor mats feature an original cast of colonial characters and folksy scenes modeled after heirloom textiles from the 18th and 19th centuries. Expect lots of early American and patriotic motifs, including old-fashioned flags, Pennsylvania Dutch fraktur, equestrian vignettes, and colonial house samplers—each made to mimic a vintage hooked rug for that cozy, homespun feeling. (We have to admit, the folk art-inspired cow and chicken is our favorite.)For Historical ReproductionsSchooner Bay Co.Even in the most painstakingly appointed interior, buying antique originals isn’t always an option (don’t ask how many times we’ve been outbid at an estate auction). And that’s where this trusted Pennsylvania-based retailer for historical reproductions comes in. Offering a colossal collection of framed art prints, decorative trays, and brass objects (think magnifying glasses, compasses, paperweights, and letter openers), these connoisseurs of the classics have decor for every old-timey aesthetic, whether it’s fox hunt prints for your cabin, Dutch landscapes for your cottage, or primitive animal portraits for your farmstead.For General Store StaplesFarmhouse EclecticsHand-plucked from New England antique shops, estate sales, and auctions, the salvaged sundries from this Massachusetts-based supplier (who grew up in an 1850s farmhouse himself) are the type you might spy in an old country store—wooden crates emblazoned with the names of local dairies, antique apple baskets, seed displays, signs, and scales. Whether you’re setting up your farmstand or styling your entryway, you’ll have plenty of storage options and authentic accents to pick from here. Becky Luigart-Stayner for Country LivingSo many food scales, so little time.Related StoriesJackie BuddieJackie Buddie is a freelance writer with more than a decade of editorial experience covering lifestyle topics including home decor how-tos, fashion trend deep dives, seasonal gift guides, and in-depth profiles of artists and creatives around the globe. She holds a degree in journalism from the University of North Carolina at Chapel Hill and received her M.F.A. in creative writing from Boston University. Jackie is, among other things, a collector of curiosities, Catskills land caretaker, dabbling DIYer, day hiker, and mom. She lives in the hills of Bovina, New York, with her family and her sweet-as-pie rescue dog.
  • How to choose a programmatic video advertising platform: 8 considerations

Whether you’re an advertiser or a publisher, partnering up with the right programmatic video advertising platform is one of the most important business decisions you can make. More than half of U.S. marketing budgets are now devoted to programmatically purchased media, and there’s no indication that trend will reverse any time soon.

Everybody wants to find the solution that’s best for their bottom line. However, the specific considerations that should go into choosing the right video programmatic advertising solution differ depending on whether you have supply to sell or are looking for an audience for your advertisements. This article will break down key factors for both mobile advertisers and mobile publishers to keep in mind as they search for a programmatic video advertising platform.

Before we get into the specifics on either end, let’s recap the basic concepts.

What is a programmatic video advertising platform?

A programmatic video advertising platform combines tools, processes, and marketplaces to place video ads from advertising partners in ad placements furnished by publishing partners. The “programmatic” part of the term means that it’s all done procedurally via automated tools, integrating with demand side platforms and supply side platforms to allow advertising placements to be bid upon, selected, and displayed in fractions of a second. (A simplified sketch of this selection flow appears at the end of this article.)

If a mobile game has ever offered you extra rewards for watching a video and you found yourself watching an ad for a related game a split second later, you’ve likely been on the user side of a programmatic advertising transaction. Now let’s take a look at what considerations make for the ideal programmatic video advertising platform for the other two main parties involved.

4 points to help advertisers choose the best programmatic platform

Looking for the best way to leverage your video demand side platform? These are four key points for advertisers to consider when trying to find the right programmatic video advertising platform.

A large, engaged audience
One of the most important things a programmatic video advertising platform can do for advertisers is put their creative content in front of as many people as possible. However, it’s not enough to just pass your content in front of the most eyeballs. It’s equally important for the platform to give you access to engaged audiences who are more likely to convert, so you can make the most of your advertising dollar.

Full-screen videos to grab attention
You need every advantage you can get when you’re grappling for the attention of a busy mobile user. Your video demand side platform should prioritize full-screen takeovers when and where they make sense, making sure your content isn’t just playing unnoticed on the far side of the screen.

A range of ad options that are easy to test
Your video programmatic advertising partner should be able to offer a broad variety of creative and placement options, including interstitial and rewarded ads. It should also enable you to test, iterate, and optimize ads as soon as they’re put into rotation, ensuring your ad spend is meeting your targets and allowing for fast and flexible changes if needed.

Simple access to supply
Even the most powerful programmatic video advertising platform is no good if it’s impractical to get running. Look for partners that allow instant access to supply through tried-and-true platforms like Google Display & Video 360, Magnite, and others. On top of that, you should seek out a private exchange to ensure access to premium inventory.

4 points for publishers in search of the best programmatic platform

You work hard to make the best apps for your users, and you deserve to partner up with a programmatic video advertising platform that works hard too. Serving video ads that both keep users engaged and keep your profits rising can be a tricky needle to thread, but the right platform should make your part of the process simple and effective.

A large selection of advertisers
Encountering the same ads over and over again can get old fast — and diminish engagement. On top of that, a small selection of advertisers means fewer chances for your users to connect with an ad and convert — which means less revenue, too. The ideal programmatic video advertising platform will partner with thousands of advertisers to fill your placements with fresh, engaging content.

Rewarded videos and offerwalls
Interstitial video ads aren’t likely to disappear any time soon, but players strongly prefer other means of advertisement. In fact, 76% of US mobile gamers say they prefer rewarded videos over interstitial ads. Giving players the choice of when to watch ads, with the inducement of in-game rewards, can be very powerful — and an offerwall is another powerful way to put the ball in your player’s court.

Easy supply-side SDK integration
The time your developers spend integrating a new video programmatic advertising solution into your apps is time they could have spent making those apps more engaging for users. While any backend adjustment will naturally take some time to implement, your new programmatic partner should offer a powerful, industry-standard SDK to make the process fast and non-disruptive.

Support for programmatic mediation
Mediators such as LevelPlay by ironSource automatically prioritize ad demand from multiple third-party networks, optimizing your cash flow and reducing work on your end. Your programmatic video advertising platform should seamlessly integrate with mediators to make the most of each ad placement, every time.

Pick a powerful programmatic partner
Thankfully, advertisers and publishers alike can choose one solution that checks all the above boxes and more. For advertisers, the ironSource Programmatic Marketplace will connect you with targeted audiences in thousands of apps that gel with your brand. For publishers, ironSource’s marketplace means a massive selection of ads that your users and your bottom line will love.
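To make the “bid upon, selected, and displayed” flow described earlier concrete, here is a minimal Kotlin sketch of how a platform might pick the winning video ad for a rewarded placement. It is purely illustrative: the names (DemandPartner, Bid, selectWinningAd, the example networks) are hypothetical and are not the ironSource, LevelPlay, or Unity API, and a real platform would run the auction across external exchanges and DSPs in a fraction of a second rather than in-process. The same highest-price prioritization is what a mediation layer automates across multiple networks for every placement.

// Illustrative sketch only; none of these names correspond to a real SDK or API.
data class VideoCreative(val advertiser: String, val videoUrl: String, val isRewarded: Boolean)

data class Bid(val partner: String, val cpmUsd: Double, val creative: VideoCreative)

// A demand-side partner answers a placement request with an optional bid.
fun interface DemandPartner {
    fun bidFor(placementId: String): Bid?
}

// The "programmatic" core: collect bids from every connected demand source
// for a placement and automatically keep the highest-paying one.
fun selectWinningAd(placementId: String, partners: List<DemandPartner>): Bid? =
    partners.mapNotNull { it.bidFor(placementId) }
        .maxByOrNull { it.cpmUsd }

fun main() {
    // Two toy demand sources standing in for real DSP or exchange integrations.
    val partners = listOf(
        DemandPartner { Bid(partner = "network-a", cpmUsd = 7.50,
            creative = VideoCreative("Puzzle Game X", "https://example.com/a.mp4", isRewarded = true)) },
        DemandPartner { Bid(partner = "network-b", cpmUsd = 9.25,
            creative = VideoCreative("Racing Game Y", "https://example.com/b.mp4", isRewarded = true)) }
    )

    // The publisher's app asks for an ad for its rewarded placement and shows the winner.
    val winner = selectWinningAd("rewarded_main_menu", partners)
    println("Showing ${winner?.creative?.advertiser} at ${winner?.cpmUsd} CPM")
}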
    UNITY.COM
  • Do you think Sony will make support for their rumored new handheld mandatory for developers?

    Red Kong XIX
    Member

    Oct 11, 2020

    13,560

    This is assuming that the handheld can play PS4 games natively without any issues, so they are not included in the poll.
    Hardware leaker Kepler said it should be able to run PS5 games, even without a patch, but with a performance impact potentially. 

    Hero_of_the_Day
    Avenger

    Oct 27, 2017

    19,958

    Isn't the rumor that games don't require patches to run on it? That would imply that support isn't mandatory, but automatic.
     

    Homura
    ▲ Legend ▲
    Member

    Aug 20, 2019

    7,232

    As the post above said, the rumor is the PS5 portable will be able to run natively any and all PS4/PS5 games.

    Of course, some games might not work properly or require specific patches, but the idea is automatic compatibility. 

    shadowman16
    Member

    Oct 25, 2017

    42,292

Ideally you'd want stuff to pretty much work out of the box. The more you ask devs to do, the less I imagine will want to support it... Or suddenly games get pared down so that they can run on handhelds.

    I personally would just prefer a solution where its automatic. I dont really care about a Sony handheld, dont really want devs to be forced to support the thing 

    Modest_Modsoul
    Living the Dreams
    Member

    Oct 29, 2017

    28,418


     

    setmymindforopensky
    Member

    Apr 20, 2025

    67

    a lot of games have performance modes. it should run a lot of the library even without any patching. if there's multiplat im sure itll default to the PS4 ver. im not sure what theyd do for something like GTA6 but itll have a series S version so its clearly scalable enough.

    im guessing PSTV situation. support it or not we dont care. 

    reksveks
    Member

    May 17, 2022

    7,628

Think Kepler is personally assuming that running without patches is a goal, and one that won't happen just cause it's too late to force it.

    It's going to be an interesting solution to an interesting problem 

    Servbot24
    The Fallen

    Oct 25, 2017

    47,826

    Obviously not. Pretty absurd question tbh.
     

    RivalGT
    Member

    Dec 13, 2017

    7,616

    This one sounds like it requires a lot of work on Sony's end, I dont think developers will need to do much for games to work.

Granted, moving forward Sony is likely to make it easier for devs to have more input on this portable mode.

Things working out of the box is likely the goal, and that's what Sony needs if they want this to work, but devs having more input on this mode would be a plus I think. 

    Callibretto
    Member

    Oct 25, 2017

    10,445

    Indonesia

    shadowman16 said:

Ideally you'd want stuff to pretty much work out of the box. The more you ask devs to do, the less I imagine will want to support it... Or suddenly games get pared down so that they can run on handhelds.

I personally would just prefer a solution where its automatic. I dont really care about a Sony handheld, dont really want devs to be forced to support the thing

Depends on the game imo, asking CD Projekt to somehow make Witcher 4 playable on handheld might be unreasonable. but any game that can run on Switch 2 should be playable on PSPortable without much issue
     

    Pheonix1
    Member

    Jun 22, 2024

    716

Absolutely they will. Not sure why people think it would be hard, if they hand them the right tools most ports won't take long anyhow.
     

    skeezx
    Member

    Oct 27, 2017

    23,994

    guessing there will be a "portable approved" label with the respective games going forward, regardless whether it's a PS5 or PS6 game. and when the thing is released popular past titles will be retroactively approved by sony, and up to developers if they want to patch the bigger games to be portable friendly.

    i guess where things could get tricky/laborious for developers is whether every game going forward is required to screen for portable performance, as it's not a PC so the portable will likely disallow for running "non-approved" games at all 

    AmFreak
    Member

    Oct 26, 2017

    3,245

    They need to give people some form of guarantee that it will get games, otherwise they greatly diminish their potential success.

The best way to do this is to make it another SKU of the contemporary console. And with everything already running at 60fps and progression slowing to a crawl, it's far easier than it had been in the past. 

    Ruck
    Member

    Oct 25, 2017

    3,105

    I mean, what is the handheld? PS6? Or an actual second console? If the former, then yes, if the latter then no
     

    TitanicFall
    Member

    Nov 12, 2017

    9,340

    Nah. It might be incentivized though. There's not much in it for devs if it's a cross buy situation.
     

    Callibretto
    Member

    Oct 25, 2017

    10,445

    Indonesia

    imo, PS6 will remain their main console, focusing on high fidelity visuals that Switch 2 and portable PC won't be able to run without huge compromise.

    PSPortable will be secondary console, something like PSPortal, but this time able to play any games that Switch2 can reasonably run. and for the high end games that it can't run, it will use streaming, either from PS6 you own, or PS+ Premium subs 

    bleits
    Member

    Oct 14, 2023

    373

    They have to if they want to be taken seriously
     

    Vic Damone Jr.
    Member

    Oct 27, 2017

    20,534

    Nope Sony doesn't mandate this stuff and it's why their second product always dies.
     

    fiendcode
    Member

    Oct 26, 2017

    26,514

    I think it depends on what the device really is, if it's more of a "Portal 2" or a "Series SP" or something else entirely. Streaming might be enough for PS6 games along with incentivized PS5/4 patches but whatever SIE does they need to make sure their inhouse teams are ALL on board this time. That was a big part of PSP/Vita's downfall, that the biggest or most important PS Studios snubbed them and the teams that did show up with support are mostly closed and gone now.
     

    Callibretto
    Member

    Oct 25, 2017

    10,445

    Indonesia

    bleits said:

    They have to if they want to be taken seriously


from the last interview with PS execs about Switch 2 specs, it seems clear that PS has no plans to abandon high end console specs to switch to mobile hardware like the Switch 2 and Xbox Ally.

PS considers their high fidelity visuals an advantage and differentiator from Nintendo.

so with PS6, their top studio will eventually make games that just won't realistically run on handheld devices.

so having a mandate where all PS6 games are playable on handheld is simply unrealistic imo 

    danm999
    Member

    Oct 29, 2017

    19,929

    Sydney

    Incentives, not mandates.
     

    NSESN
    ▲ Legend ▲
    Member

    Oct 25, 2017

    27,729

I think people are setting themselves up for disappointment in regards to how powerful this thing will be
     

    defaltoption
    Plug in a controller and enter the Konami code
    The Fallen

    Oct 27, 2017

    12,485

    Austin

    Depends on what they call it.

    If they call it anything related to ps6, expect very bad performance, and mandates

    If they call it ps5 portable, expect bad performance and no mandates as it will be handled on their end

    If they call it a ps portable expect it to have no support from Sony and get whatever it gets just be happy it functions till they abandon it. 

    Metnut
    Member

    Apr 7, 2025

    30

    Good question OP.

    I voted the middle one. I think anything that ships for PS5 will need to work for the handheld. Question is whether that works automatically or will need patches. 

    mute
    ▲ Legend ▲
    Member

    Oct 25, 2017

    29,807

    I think that would require a level of commitment to a secondary piece of hardware that Sony hasn't shown in a long time.
     

    Patison
    Member

    Oct 27, 2017

    761

It's difficult to say without knowing what they're planning with this device exactly. If they're fully going the Switch route or more like a Steam Deck, which will run launch games perfectly and then, as time goes on, some titles might start looking less than ideal or be unplayable at all.

    Or Series S/X, just the Series S being portable — that would be preferable but also limiting but also diminishing returns between generations so might be worth it etc.

    And if that device happens at all and its development won't be dropped soon is another question. Lots of unknowns, but I'm interested to see what Sony comes up with, as long as they'll have games to support it this time around. 

    Jammerz
    Member

    Apr 29, 2023

    1,579

    I think it will be optional support.

However Sony needs to support it with their first parties to set an example and make it as easy as possible for other devs to scale down. For Sony first-party games maybe use Nixxes to scale down so their studios aren't bogged down. 

    Hamchan
    The Fallen

    Oct 25, 2017

    6,000

    I think 99.9% of games will be crossgen between PS5 and PS6 for the entire generation, just based on how this industry is going, so it might not be much of an issue for Sony to mandate.
     

    Advance.Wars.Sgt.
    Member

    Jun 10, 2018

    10,456

Honestly, I'd worry more about Sony's 1st party teams than 3rd party developers since they were notoriously averse to making software with a handheld power profile in mind.
     

    overthewaves
    Member

    Sep 30, 2020

    1,203

    Wouldn't that hamstring the games for ps6? That's PlayStation players biggest fear they don't want a series S type situation right? They treat series S like a punching bag.
     

    Neonvisions
    Member

    Oct 27, 2017

    707

    overthewaves said:

    Wouldn't that hamstring the games for ps6? That's PlayStation players biggest fear they don't want a series S type situation right? They treat series S like a punching bag.


How would that affect PS6? Are you suggesting that the Series S hamstrings games for the X? 

    Gwarm
    Member

    Nov 13, 2017

    2,902

I'd be shocked if Sony released a device that lets you play games that haven't been patched or confirmed to run acceptably. Imagine if certain games just hard crashed the console? This is the company that wouldn't let you play certain Vita games on the PSTV even if they actually worked.
     

    bloopland33
    Member

    Mar 4, 2020

    3,845

    I wonder if they'll just do the Steam Deck thing and do a compatibility badge. You can boot whatever software you want, but it might run at 5 fps and drain your battery.

    This would be in addition to whatever efforts they're doing to make things work out of the box, of course.

    But it's hard to imagine them mandating developers ship a PS6 profile and a PS6P profile for those heavier games 5-7 years from now…

    ….but it's also hard to imagine them shipping this PS6-gen device that doesn't play everything. So maybe they Steam Deck it 

    vivftp
    Member

    Oct 29, 2017

    23,016

My guess: every PS6 game will be mandated to support it. The simpler PS5 games will be supported natively, and the rest will require a patch, as has been rumored, to run on the lesser specs

    I think next gen we get PS3 and Vita emulation so the PS6 and portable will be able to play games from PSN from every past PlayStation 

    Mocha Joe
    Member

    Jun 2, 2021

    13,636

Really need to take the Steam Deck approach and not make it a requirement. Just make it a complementary device where it is possible to play the majority of the games available on PSN.
     

    overthewaves
    Member

    Sep 30, 2020

    1,203

    Neonvisions said:

How would that affect PS6? Are you suggesting that the Series S hamstrings games for the X?


    I mean did you see the reaction here to the series S announcement lol. Everyone was saying it's gonna "hold back the generation".
     

    reksveks
    Member

    May 17, 2022

    7,628

    Neonvisions said:

How would that affect PS6? Are you suggesting that the Series S hamstrings games for the X?


Or the perception is that it does, but the truth is that there are a lot of factors
     

    Fabs
    Member

    Aug 22, 2019

    2,827

I can't see them forcing handheld and Pro support next gen.
     

    level
    Member

    May 25, 2023

    1,427

    Definitely not

    Games already take too long to make. Extra time isn't something they'll want to reinforce to their developers. 

    gofreak
    Member

    Oct 26, 2017

    8,411

    I don't think support will be mandatory. I think they're bringing it into a reality where a growing portion of games can, or could, run without much change or effort on the developer's part on a next gen handheld. They'll lean on that natural trend rather than a policy - anything that is outside of that will just be streamable as now with the Portal.
     

    Caiusto
    Member

    Oct 25, 2017

    7,086

    If they don't want to end up with another Vita yes they will.
     

    mute
    ▲ Legend ▲
    Member

    Oct 25, 2017

    29,807

    Advance.Wars.Sgt. said:

Honestly, I'd worry more about Sony's 1st party teams than 3rd party developers since they were notoriously averse to making software with a handheld power profile in mind.


    It does seem kinda unthinkable that Intergalactic would be made with a handheld in mind, for example.
     

    AmFreak
    Member

    Oct 26, 2017

    3,245

    mute said:

    It does seem kinda unthinkable that Intergalactic would be made with a handheld in mind, for example.


    Ratchet, Returnal, Cyberpunk, etc. also weren't made "with a handheld in mind".
     

    Spoit
    Member

    Oct 28, 2017

    5,599

Given how much of a pain the Series S mandate has been, I don't see them binding even first party studios to it, especially ones that are trying to go for the cutting edge of tech, since, given AMD's timelines, it's not going to be anywhere near a base PS5.

I'm also skeptical of the claim that it'll be able to play PS5 games without extensive patching. 

    Jawmuncher
    Crisis Dino
    Moderator

    Oct 25, 2017

    45,166

    Ibis Island

    No, I think the portable will handle portable stuff "automatically" for what it converts
     

    knightmawk
    Member

    Dec 12, 2018

    8,900

    I expect they'll do everything they can to make sure no one has to think about it and it's as automatic as possible. It'll technically still be part of cert, but the goal will be for it to be rare that a game fails that part of cert and has to be sent back.

    That being said, I imagine there will be some games that still don't work and developers will be able to submit for that exception. 

    RivalGT
    Member

    Dec 13, 2017

    7,616

I think the concept here is similar to how PS4 games play on PS5, the ones with patches I mean: the game will run with a different graphics preset than it would on PS4/PS4 Pro, so in some cases this means higher resolution or a higher frame rate cap.

What Sony needs to work on from their end is getting this to work without any patches from developers. It's the only way this can work. 

    Vexii
    Member

    Oct 31, 2017

    3,103

    UK

if they don't mandate support, it'll just be a death knell for the format. I don't think they could get away with a dedicated handheld platform now when the Switch and Steam Deck exist
     

    Mobius and Pet Octopus
    Member

    Oct 25, 2017

    17,065

Just because a game can run on a handheld doesn't mean that's all that's required for support. The UI alone likely requires changes for an optimal experience, sometimes necessary for it to be "playable". Small screen sizes usually need changes.
     

    SeanMN
    Member

    Oct 28, 2017

    2,437

If PS6 game support is optional, that will create fragmentation of the platform and uncertain software support.

If it's part of the PS6 family and support is mandatory, I can see there being concern that it would hold the generation back with a low-capability SKU.

    My thoughts are this should be a PS6 and support the same as the primary console. 
RivalGT Member Dec 13, 2017 7,616 I think the concept here is similar to how PS4 games play on PS5, the ones with patches I mean, the game will run with a different graphics preset then it would on PS4/ PS4 Pro, so in some cases this means higher resolution or higher frame rate cap. What Sony needs to work on their end is getting this to work without any patches from developers. Its the only way this can work.  Vexii Member Oct 31, 2017 3,103 UK if they don't mandate support, it'll just be a death knell for the format. I don't think they could get away with a dedicated handheld platform now when the Switch and Steam Deck exists   Mobius and Pet Octopus Member Oct 25, 2017 17,065 Just because a game can run on a handheld, doesn't mean that's all required for support. The UI alone likely requires changes for an optimal experience, sometimes necessary to be "playable". Small screen sizes usually needs changes.   SeanMN Member Oct 28, 2017 2,437 If PS6 games support is optional, that will create fragmentation of the platform and uncertain software support. If it's part of the PS6 family and support is mandatory, I can see there being concern that if would hold the generation back with a low capability sku. My thoughts are this should be a PS6 and support the same as the primary console.  #you #think #sony #will #make
    Do you think Sony will make support for their rumored new handheld mandatory for developers?
    Red Kong XIX (OP): This is assuming that the handheld can play PS4 games natively without any issues, so they are not included in the poll. Hardware leaker Kepler said it should be able to run PS5 games even without a patch, though potentially with a performance impact.
    Hero_of_the_Day: Isn't the rumor that games don't require patches to run on it? That would imply that support isn't mandatory, but automatic.
    Homura: As the post above said, the rumor is that the PS5 portable will be able to run natively any and all PS4/PS5 games. Of course, some games might not work properly or might require specific patches, but the idea is automatic compatibility.
    shadowman16: Ideally you'd want stuff to pretty much work out of the box. The more you ask devs to do, the fewer I imagine will want to support it... or suddenly games get pared down so that they can run on handhelds (which, considering how much people hated cross-gen for that reason, they'd hate here as well). I'd personally just prefer a solution where it's automatic. I don't really care about a Sony handheld, and I don't really want devs to be forced to support the thing (considering how bad Sony is at supporting its peripherals, like the Vita or PSVR2).
    Modest_Modsoul: 🤷‍♂️
    setmymindforopensky: A lot of games have performance modes, so it should run a lot of the library even without any patching. If there's a multiplat version, I'm sure it'll default to the PS4 one. I'm not sure what they'd do for something like GTA 6, but it'll have a Series S version, so it's clearly scalable enough. I'm guessing a PSTV situation: support it or not, we don't care.
    reksveks: I think Kepler is assuming that running without patches is a goal, and one that won't happen simply because it's too late to force it. It's going to be an interesting solution to an interesting problem.
    Servbot24: Obviously not. Pretty absurd question, tbh.
    RivalGT: This one sounds like it requires a lot of work on Sony's end; I don't think developers will need to do much for games to work. Granted, moving forward Sony is likely to make it easier for devs to have more input on this portable mode. Things working out of the box is likely the goal, and that's what Sony needs if they want this to work, but devs having more input on this mode would be a plus, I think.
    Callibretto (replying to shadowman16): Depends on the game, imo. Asking CD Projekt to somehow make The Witcher 4 playable on a handheld might be unreasonable, but any game that can run on Switch 2 should be playable on a PS portable without much issue.
    Pheonix1: Absolutely they will. Not sure why people think it would be hard; if they hand them the right tools, most ports won't take long anyhow.
    skeezx: Guessing there will be a "portable approved" label on games going forward, regardless of whether it's a PS5 or PS6 game, and when the thing is released, popular past titles will be retroactively approved by Sony, with developers deciding whether they want to patch the bigger games to be portable friendly. I guess where things could get tricky/laborious for developers is whether every game going forward is required to screen for portable performance, as it's not a PC, so the portable will likely disallow running "non-approved" games at all.
    AmFreak: They need to give people some form of guarantee that it will get games, otherwise they greatly diminish its potential success. The best way to do this is to make it another SKU of the contemporary console. And with (close to) everything already running at 60fps and progression slowing to a crawl, it's far easier than it had been in the past.
    Ruck: I mean, what is the handheld? A PS6? Or an actual second console? If the former, then yes; if the latter, then no.
    TitanicFall: Nah. It might be incentivized, though. There's not much in it for devs if it's a cross-buy situation.
    Callibretto: Imo, the PS6 will remain their main console, focusing on high-fidelity visuals that the Switch 2 and portable PCs won't be able to run without huge compromises. The PS portable will be a secondary console, something like the PS Portal, but this time able to play any game that the Switch 2 can reasonably run. And for the high-end games it can't run, it will use streaming, either from a PS6 you own or via PS+ Premium.
    bleits: They have to if they want to be taken seriously.
    Vic Damone Jr.: Nope. Sony doesn't mandate this stuff, and it's why their second product always dies.
    fiendcode: I think it depends on what the device really is: more of a "Portal 2", a "Series SP", or something else entirely (PSP3?). Streaming might be enough for PS6 games along with incentivized PS5/PS4 patches, but whatever SIE does, they need to make sure their in-house teams are ALL on board this time. That was a big part of the PSP/Vita's downfall: the biggest and most important PlayStation Studios snubbed them, and the teams that did show up with support are mostly closed and gone now.
    Callibretto (replying to bleits): From the last interview with a PlayStation exec about the Switch 2's specs, it seems clear that PlayStation has no plan to abandon high-end console specs and move to mobile-class hardware like the Switch 2 and Xbox Ally. PlayStation considers its high-fidelity visuals an advantage and a differentiator from Nintendo. So with the PS6, their top studios will eventually make games that just won't realistically run on handheld devices, and a mandate that all PS6 games be playable on a handheld is simply unrealistic, imo.
    danm999: Incentives, not mandates.
    NSESN: I think people are setting themselves up for disappointment with regard to how powerful this thing will be.
    defaltoption: Depends on what they call it. If they call it anything related to the PS6, expect very bad performance and mandates. If they call it a PS5 portable, expect bad performance and no mandates, as it will be handled on their end. If they call it a PS portable, expect it to get no support from Sony; just be happy it functions until they abandon it.
    Metnut: Good question, OP. I voted for the middle one. I think anything that ships for PS5 will need to work on the handheld. The question is whether that works automatically or will need patches.
    mute: I think that would require a level of commitment to a secondary piece of hardware that Sony hasn't shown in a long time.
    Patison: It's difficult to say without knowing what exactly they're planning with this device. Are they fully going the Switch route (or the PS Vita/PS TV route), or more the Steam Deck route, where launch games run perfectly and then, as time goes on, some titles start looking less than ideal or become unplayable altogether? Or Series S/X, with the handheld as a portable Series S — that would be preferable but also limiting, though with diminishing returns between generations it might be worth it. And whether the device happens at all, and whether its development won't be dropped soon, is another question. Lots of unknowns, but I'm interested to see what Sony comes up with, as long as they have games to support it this time around.
    Jammerz: I think it will be optional support. However, Sony needs to support it with their first parties to set an example, and make it as easy as possible for other devs to scale down. For Sony first-party games, maybe use Nixxes to scale down so their studios aren't bogged down.
    Hamchan: I think 99.9% of games will be cross-gen between PS5 and PS6 for the entire generation, just based on how this industry is going, so it might not be much of an issue for Sony to mandate.
    Advance.Wars.Sgt.: Honestly, I'd worry more about Sony's first-party teams than third-party developers, since they were notoriously averse to making software with a handheld power profile in mind.
    overthewaves: Wouldn't that hamstring the games for the PS6? That's PlayStation players' biggest fear; they don't want a Series S type situation, right? They treat the Series S like a punching bag.
    Neonvisions (replying to overthewaves): How would that affect the PS6? Are you suggesting that the Series S hamstrings games for the Series X?
    Gwarm: I'd be shocked if Sony released a device that lets you play games that haven't been patched or confirmed to run acceptably. Imagine if certain games just hard-crashed the console. This is the company that wouldn't let you play certain Vita games on the PSTV even if they actually worked.
    bloopland33: I wonder if they'll just do the Steam Deck thing and use a compatibility badge: you can boot whatever software you want, but it might run at 5 fps and drain your battery. This would be in addition to whatever efforts they're making to get things working out of the box, of course. But it's hard to imagine them mandating that developers ship both a PS6 profile and a PS6P profile for those heavier games 5-7 years from now… but it's also hard to imagine them shipping a PS6-gen device that doesn't play everything (depending on how they position it). So maybe they Steam Deck it.
    vivftp: My guess: every PS6 game will be mandated to support it. PS5 games will support it natively for the simpler titles and, as has been rumored, will require a patch to run on the lesser specs. I think next gen we get PS3 and Vita emulation, so the PS6 and the portable will be able to play games from PSN from every past PlayStation.
    Mocha Joe: They really need to take the Steam Deck approach and not make it a requirement. Just make it a complementary device where it's possible to play the majority of the games available on PSN.
    overthewaves (replying to Neonvisions): I mean, did you see the reaction here to the Series S announcement, lol. Everyone was saying it's gonna "hold back the generation".
    reksveks (replying to Neonvisions): Or the perception is that it does, but the truth is that there are a lot of factors.
    Fabs: I can't see them forcing handheld and Pro support next gen.
    level: Definitely not. Games already take too long to make, and extra time isn't something they'll want to impose on their developers.
    gofreak: I don't think support will be mandatory. I think they're moving into a reality where a growing portion of games can, or could, run on a next-gen handheld without much change or effort on the developer's part. They'll lean on that natural trend rather than a policy; anything outside of that will just be streamable, as it is now with the Portal.
    Caiusto: If they don't want to end up with another Vita, yes they will.
    mute (replying to Advance.Wars.Sgt.): It does seem kinda unthinkable that Intergalactic would be made with a handheld in mind, for example.
    AmFreak (replying to mute): Ratchet, Returnal, Cyberpunk, etc. also weren't made "with a handheld in mind".
    Spoit: Given how much of a pain the Series S mandate has been, I don't see them binding even first-party studios to it, especially ones that are trying to go for the cutting edge of tech, since, given AMD's timelines, it's not going to be anywhere near a base PS5. I'm also skeptical of the claim that it'll be able to play PS5 games without extensive patching.
    Jawmuncher: No, I think the portable will handle portable stuff "automatically" for what it converts.
    knightmawk: I expect they'll do everything they can to make sure no one has to think about it and it's as automatic as possible. It'll technically still be part of cert, but the goal will be for it to be rare that a game fails that part of cert and has to be sent back. That being said, I imagine there will be some games that still don't work, and developers will be able to submit for an exception.
    RivalGT: I think the concept here is similar to how PS4 games play on PS5 (the patched ones, I mean): the game runs with a different graphics preset than it would on a PS4/PS4 Pro, so in some cases that means a higher resolution or a higher frame-rate cap. What Sony needs to work on is getting this to happen without any patches from developers. It's the only way this can work.
    Vexii: If they don't mandate support, it'll just be a death knell for the format. I don't think they could get away with a dedicated handheld platform now that the Switch and Steam Deck exist.
    Mobius and Pet Octopus: Just because a game can run on a handheld doesn't mean that's all that's required for support. The UI alone likely needs changes for an optimal experience, and sometimes just to be "playable"; small screen sizes usually demand adjustments.
    SeanMN: If PS6 game support is optional, that will create fragmentation of the platform and uncertain software support. If it's part of the PS6 family and support is mandatory, I can see concern that it would hold the generation back with a low-capability SKU. My thought is that this should be a PS6 and support the same games as the primary console.
  • 15 riveting images from the 2025 UN World Oceans Day Photo Competition

    Big and Small Underwater Faces — 3rd Place.
    Trips to the Antarctic Peninsula always yield amazing encounters with leopard seals (Hydrurga leptonyx). Boldly approaching me and baring his teeth, this individual was keen to point out that this part of Antarctica was his territory. This picture was shot at dusk, resulting in the rather moody atmosphere.
     
    Credit: Lars von Ritter Zahony (Germany) / World Oceans Day


    The striking eye of a humpback whale named Sweet Girl peers at the camera. Just four days later, she would be dead, struck by a speeding boat, one of the 20,000 whales killed by ship strikes each year. Photographer Rachel Moore’s captivating image (seen below) of Sweet Girl earned top honors at the 2025 United Nations World Oceans Day Photo Competition.
    Wonder: Sustaining What Sustains Us — Winner.
    This photo, taken in Mo’orea, French Polynesia in 2024, captures the eye of a humpback whale named Sweet Girl, just days before her tragic death. Four days after I captured this intimate moment, she was struck and killed by a fast-moving ship. Her death serves as a heartbreaking reminder of the 20,000 whales lost to ship strikes every year. We are using her story to advocate for stronger protections, petitioning for stricter speed laws around Tahiti and Mo’orea during whale season. I hope Sweet Girl’s legacy will spark real change to protect these incredible animals and prevent further senseless loss.
    Credit: Rachel Moore (USA) / United Nations World Oceans Day www.unworldoceansday.org
    Now in its twelfth year, the competition is coordinated by the UN Division for Ocean Affairs and the Law of the Sea, DivePhotoGuide (DPG), Oceanic Global, and the Intergovernmental Oceanographic Commission of UNESCO. Each year, thousands of underwater photographers submit images, and judges award prizes across four categories: Big and Small Underwater Faces, Underwater Seascapes, Above Water Seascapes, and Wonder: Sustaining What Sustains Us.
    This year’s winning images include a curious leopard seal, a swarm of jellyfish, and a very grumpy-looking Japanese warbonnet. Given our oceans’ perilous state, all competition participants were required to sign a charter of 14 commitments regarding ethics in photography.
    Underwater Seascapes — Honorable Mention.
    With only orcas as their natural predators, leopard seals are Antarctica’s most versatile hunters, preying on everything from fish and cephalopods to penguins and other seals. Gentoo penguins are a favored menu item, and leopard seals can be observed patrolling the waters around their colonies. For this shot, I used a split image to capture both worlds: the gentoo penguin colony in the background with the leopard seal on the hunt in the foreground.
    Credit: Lars von Ritter Zahony (Germany) / United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — Winner.
    A serene lake cradled by arid dunes, where a gentle stream breathes life into the heart of Mother Earth’s creation: Captured from an airplane, this image reveals the powerful contrasts and hidden beauty where land and ocean meet, reminding us that the ocean is the source of all life and that everything in nature is deeply connected. The location is a remote stretch of coastline near Shark Bay, Western Australia.
    Credit: Leander Nardin (Austria) / United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — 3rd Place.
    Paradise Harbour is one of the most beautiful places on the Antarctic Peninsula. When I visited, the sea was extremely calm, and I was lucky enough to witness a wonderfully clear reflection of the Suárez Glacier (aka Petzval Glacier) in the water. The only problem was the waves created by our speedboat, and the only way to capture the perfect reflection was to lie on the bottom of the boat while it moved towards the glacier.
    Credit: Andrey Nosik (Russia) / United Nations World Oceans Day www.unworldoceansday.org
    Underwater Seascapes — 3rd Place.
    “La Rapadura” is a natural hidden treasure on the northern coast of Tenerife, in the Spanish territory of the Canary Islands. Only discovered in 1996, it is one of the most astonishing underwater landscapes in the world, consistently ranking among the planet’s best dive sites. These towering columns of basalt are the result of volcanic processes that occurred between 500,000 and a million years ago. The formation was created when a basaltic lava flow reached the ocean, where, upon cooling and solidifying, it contracted, creating natural structures often compared to the pipes of church organs. Located in a region where marine life has been impacted by once common illegal fishing practices, this stunning natural monument has both geological and ecological value, and scientists and underwater photographers are advocating for its protection. (Model: Yolanda Garcia)
    Credit: Pedro Carrillo (Spain) / United Nations World Oceans Day www.unworldoceansday.org
    Underwater Seascapes — Winner.
    This year, I had the incredible opportunity to visit a jellyfish lake during a liveaboard trip around southern Raja Ampat, Indonesia. Being surrounded by millions of jellyfish, which have evolved to lose their stinging ability due to the absence of predators, was one of the most breathtaking experiences I’ve ever had.
    Credit: Dani Escayola (Spain) / United Nations World Oceans Day www.unworldoceansday.org
    Underwater Seascapes — 2nd Place.
    This shot captures a school of rays resting at a cleaning station in Mauritius, where strong currents once attracted them regularly. Some rays grew accustomed to divers, allowing close encounters like this. Sadly, after the severe bleaching that the reefs here suffered last year, such gatherings have become rare, and I fear I may not witness this again at the same spot.
    Credit: Gerald Rambert (Mauritius) / United Nations World Oceans Day www.unworldoceansday.org
    Wonder: Sustaining What Sustains Us — 3rd Place.
    Shot in Cuba’s Jardines de la Reina—a protected shark sanctuary—this image captures a Caribbean reef shark weaving through a group of silky sharks near the surface. Using a slow shutter and strobes as the shark pivoted sharply, the motion blurred into a wave-like arc across its head, lit by the golden hues of sunset. The abundance and behavior of sharks here is a living symbol of what protected oceans can look like.
    Credit: Steven Lopez (USA) / United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — 2nd Place.
    Northern gannets (Morus bassanus) soar above the dramatic cliffs of Scotland’s Hermaness National Nature Reserve, their sleek white bodies and black-tipped wings slicing through the Shetland winds. These seabirds, the largest in the North Atlantic, are renowned for their striking plunge-dives, reaching speeds up to 100 kph (60 mph) as they hunt for fish beneath the waves. The cliffs of Hermaness provide ideal nesting sites, with updrafts aiding their take-offs and landings. Each spring, thousands return to this rugged coastline, forming one of the UK’s most significant gannet colonies. It was a major challenge to take photos at the edge of these cliffs at almost 200 meters (650 feet) with the winds up to 30 kph (20 mph).
    Credit: Nur Tucker (UK/Turkey) / United Nations World Oceans Day www.unworldoceansday.org
    Above Water Seascapes — Honorable Mention.
    A South Atlantic swell breaks on the Dungeons Reef off the Cape Peninsula, South Africa, shot while photographing a big-wave surf session in October 2017. It’s the crescendoing sounds of these breaking swells that always amaze me.
    Credit: Ken Findlay (South Africa) / United Nations World Oceans Day www.unworldoceansday.org
    Wonder: Sustaining What Sustains Us — Honorable Mention.
    Humpback whales in their thousands migrate along the Ningaloo Reef in Western Australia every year on the way to and from their calving grounds. In four seasons of swimming with them on the reef here, this is the only encounter I’ve had like this one. This pair of huge adult whales repeatedly spy-hopped alongside us, seeking to interact with and investigate us, leaving me completely breathless. The female in the foreground was much more confident than the male behind and would constantly make close approaches, whilst the male hung back a little, still interested but shy. After more than 10 years working with wildlife in the water, this was one of the best experiences of my life.
    Credit: Ollie Clarke (UK) / United Nations World Oceans Day www.unworldoceansday.org
    Big and Small Underwater Faces — 2nd Place.
    On one of my many blackwater dives in Anilao, in the Philippines, my guide and I spotted something moving erratically at a depth of around 20 meters (65 feet), about 10 to 15 centimeters in size. We quickly realized that it was a rare blanket octopus (Tremoctopus sp.). As we approached, it opened up its beautiful blanket, revealing its multicolored mantle. I managed to take a few shots before it went on its way. I felt truly privileged to have captured this fascinating deep-sea cephalopod. Among its many unique characteristics, this species exhibits some of the most extreme sexual size-dimorphism in nature, with females weighing up to 40,000 times more than males.
    Credit: Giacomo Marchione (Italy) / United Nations World Oceans Day www.unworldoceansday.org
    Big and Small Underwater Faces — Winner.
    This photo of a Japanese warbonnet (Chirolophis japonicus) was captured in the Sea of Japan, about 50 miles (80 kilometers) southwest of Vladivostok, Russia. I found the ornate fish at a depth of about 30 meters (100 feet), under the stern of a shipwreck. This species does not appear to be afraid of divers—on the contrary, it seems to enjoy the attention—and it even tried to sit on the dome port of my camera.
    Credit: Andrey Nosik (Russia) / United Nations World Oceans Day www.unworldoceansday.org
    Wonder: Sustaining What Sustains Us — 2nd Place.
    A juvenile pinnate batfish (Platax pinnatus) captured with a slow shutter speed, a snooted light, and deliberate camera panning to create a sense of motion and drama. Juvenile pinnate batfish are known for their striking black bodies outlined in vibrant orange—a coloration they lose within just a few months as they mature. I encountered this restless subject in the tropical waters of Indonesia’s Lembeh Strait. Capturing this image took patience and persistence over two dives, as these active young fish constantly dart for cover in crevices, making the shot particularly challenging.
    Credit: Luis Arpa (Spain) / United Nations World Oceans Day www.unworldoceansday.org
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness that, you know, we later understood was an area of, weirdly, incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR to join and start investigating this thing seriously. And the first person I pulled in was you.
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
BUBECK: Oh, I remember it very well. My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.
    I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
LEE: One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
But the main purpose of this conversation isn’t to reminisce about or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. Does that make sense to you?
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. What’s up with that?
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that version of GPT-4o, so now we don’t have the sycophant version out there.
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
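To make the failure mode Sébastien describes a bit more concrete, here is a minimal toy sketch of reward-model over-optimization. Everything in it is invented for illustration: the "policy" is reduced to a single agreement-rate knob, and the proxy reward is an arbitrary function that slightly over-values agreement. It is not any real RLHF pipeline, but it shows how pushing hard on an imperfect proxy can drift a model toward sycophancy while its true usefulness drops.

```python
# Toy illustration of reward-model over-optimization (Goodhart-style), not a
# production RLHF pipeline. The "policy" is just a knob controlling how often
# the model agrees with the user; the proxy reward model slightly over-values
# agreement, so optimizing it too hard yields a sycophantic policy.
import numpy as np

rng = np.random.default_rng(0)

def true_quality(agree_rate: float) -> float:
    # Hypothetical ground truth: some agreement is fine, but blanket agreement
    # (never flagging user mistakes) hurts real usefulness.
    return 1.0 - (agree_rate - 0.5) ** 2

def proxy_reward(agree_rate: float) -> float:
    # Hypothetical learned reward model: correlated with true quality, but
    # biased toward agreement, plus a little noise.
    return true_quality(agree_rate) + 0.6 * agree_rate + rng.normal(0, 0.01)

agree = 0.5
for _ in range(200):                       # naive hill-climbing on the proxy
    candidate = float(np.clip(agree + rng.normal(0, 0.05), 0.0, 1.0))
    if proxy_reward(candidate) > proxy_reward(agree):
        agree = candidate

print(f"agreement rate after optimizing the proxy: {agree:.2f}")
print(f"true quality there: {true_quality(agree):.2f} "
      f"(vs. {true_quality(0.5):.2f} for a balanced policy)")
```

Running this drives the agreement rate toward 1.0 even though the model's true usefulness falls, which is the "trap of the reward model" in miniature.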
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI that kind of knows everything and you can just, you know, explain your own context and it will just get it and understand everything.
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. So it’s … I think it’s an important example to have in mind.
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
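As a tiny illustration of what "checkable" means here, the Lean snippet below states and proves a trivial fact; Lean's kernel verifies the proof term mechanically, whether or not a human reads it. The example is an editorial addition, not from the conversation, and the proofs Peter has in mind would of course be enormously larger.

```lean
-- A trivially small machine-checkable proof. The kernel accepts it only if
-- the proof term really establishes the stated theorem.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```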
BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see that it produced what you wanted. So I absolutely agree with that.
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
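As a rough sketch of what task-oriented evaluation can look like, the snippet below scores a model's free-text answer against a rubric of criteria rather than a multiple-choice key. The case, rubric, and function names are all invented for illustration; this is not the actual HealthBench or ADeLe code.

```python
# Hypothetical rubric-based evaluation sketch (illustrative only).
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class RubricCriterion:
    description: str
    points: int
    met: Callable[[str], bool]          # checks the model's response text

@dataclass
class EvalCase:
    prompt: str
    rubric: List[RubricCriterion]

def score_case(case: EvalCase, response: str) -> float:
    # Fraction of rubric points the response earns.
    earned = sum(c.points for c in case.rubric if c.met(response))
    total = sum(c.points for c in case.rubric)
    return earned / total if total else 0.0

# An entirely invented example case: triaging a chest-pain message.
case = EvalCase(
    prompt="Patient message: crushing chest pain for 30 minutes. Respond.",
    rubric=[
        RubricCriterion("advises emergency care", 5,
                        lambda r: "911" in r or "emergency" in r.lower()),
        RubricCriterion("avoids a definitive diagnosis", 2,
                        lambda r: "definitely" not in r.lower()),
    ],
)

model_response = "Please call 911 or go to the emergency department now."
print(f"rubric score: {score_case(case, model_response):.2f}")
```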
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
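Here is a minimal sketch of that "patients like me" retrieval idea, using made-up feature vectors and cosine similarity. A real system would work over far richer clinical data, with careful attention to privacy and confounding, but the core lookup can be this simple.

```python
# Toy "patients like me" retrieval: embed each patient record as a feature
# vector and return the outcomes of the most similar prior patients.
# All data below is invented for illustration.
import numpy as np

patient_db = np.array([
    [0.61, 0.30, 0.82, 0.10],
    [0.58, 0.35, 0.79, 0.12],
    [0.20, 0.90, 0.15, 0.70],
])
outcomes = ["responded to treatment A", "responded to treatment A",
            "required treatment B"]

def most_similar(query: np.ndarray, k: int = 2) -> list:
    # Cosine similarity between the new patient and every record in the database.
    sims = patient_db @ query / (
        np.linalg.norm(patient_db, axis=1) * np.linalg.norm(query))
    return [outcomes[i] for i in np.argsort(-sims)[:k]]

new_patient = np.array([0.60, 0.32, 0.80, 0.11])
print(most_similar(new_patient))   # outcomes of the closest prior patients
```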
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. 
And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. 
He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   #how #reshaping #future #healthcare #medical
    How AI is reshaping the future of healthcare and medical research
    Transcript [MUSIC] [BOOK PASSAGE] PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the AI equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?” [END OF BOOK PASSAGE] [THEME MUSIC] This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee. Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong? In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here. [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.” In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide. In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck. Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. Sébastien is a research lead at OpenAI. 
He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  
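    [A minimal illustration of the distinction Bill draws here, in a toy Python sketch: an explicit fact table you can point into, versus a tiny trigram model whose “knowledge” exists only as co-occurrence statistics. The corpus and names below are invented for illustration; this is not a claim about how GPT-4 actually represents facts.]

```python
# Toy contrast between explicit and implicit knowledge storage.
from collections import Counter, defaultdict

# 1) Explicit storage: the fact lives in one addressable slot you can point to.
fact_table = {("france", "capital"): "paris"}
print("explicit lookup:", fact_table[("france", "capital")])

# 2) Implicit storage: a tiny trigram language model "trained" on a few sentences.
corpus = [
    "the capital of france is paris",
    "the capital of italy is rome",
    "the capital of japan is tokyo",
]
counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b, c in zip(words, words[1:], words[2:]):
        counts[(a, b)][c] += 1  # accumulate word co-occurrence statistics

def complete(prompt: str) -> str:
    """Predict the most likely next word given the last two words of the prompt."""
    w1, w2 = prompt.split()[-2:]
    nxt = counts[(w1, w2)]
    return nxt.most_common(1)[0][0] if nxt else "<unknown>"

# The model answers correctly, yet no single entry says capital(france) = paris;
# the fact is smeared across statistics (in an LLM, across billions of weights).
print("implicit lookup:", complete("the capital of france is"))
```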
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS] BUBECK: Yes. LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you. BUBECK: Yeah. LEE: And so what were your first encounters? Because I actually don’t remember what happened then. BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. I thought that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. So this was really, to me, the first moment where I saw some understanding in those models. LEE: So this was, just to get the timing right, that was before I pulled you into the tent. BUBECK: That was before. That was like a year before. LEE: Right. BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason. So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question? LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible. LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages. 
And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. 
    Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are. Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%. Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing. LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. 
And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. 
So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. 
But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. 
    So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen. I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that. And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3-mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors. LEE: Yeah. BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models. Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
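    [For readers unfamiliar with proof assistants, below is a minimal sketch of what a machine-checkable proof looks like in Lean 4, assuming a reasonably recent toolchain. The primed theorem names are just local labels chosen for this example; machine-generated proofs would be vastly longer, but the kernel checks them the same way.]

```lean
-- Two small theorems checked entirely by the Lean kernel. A reviewer only has
-- to trust the checker, not read or understand the proof itself, which is what
-- would make very long machine-generated proofs usable in practice.

theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- A proof built by induction carries exactly the same guarantee, however long
-- or unreadable the underlying proof term becomes.
theorem le_add_right' (a b : Nat) : a ≤ a + b := by
  induction b with
  | zero => exact Nat.le_refl a
  | succ k ih => exact Nat.le_succ_of_le ih
```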
LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. 
So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
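    [A minimal sketch of the “patients like me” idea described above, in toy Python: retrieve the historical patients most similar to a new case and tally their treatments and outcomes. All records, feature encodings, and names are synthetic and purely illustrative; a real system would need far richer data, privacy safeguards, and clinical validation.]

```python
# Toy "patients like me" retrieval over synthetic records.
import math
from collections import Counter

# Each record: (feature vector, treatment, outcome). Features might encode age,
# labs, vitals, etc.; here they are just made-up numbers.
historical = [
    ([0.61, 0.20, 0.85], "drug_a", "recovered"),
    ([0.58, 0.25, 0.80], "drug_a", "recovered"),
    ([0.30, 0.90, 0.10], "drug_b", "readmitted"),
    ([0.62, 0.22, 0.79], "drug_b", "recovered"),
    ([0.28, 0.85, 0.15], "drug_b", "recovered"),
]

def cosine(u, v):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def patients_like_me(new_patient, k=3):
    """Rank historical patients by similarity and tally their treatments/outcomes."""
    ranked = sorted(historical, key=lambda rec: cosine(new_patient, rec[0]), reverse=True)
    return Counter((treatment, outcome) for _, treatment, outcome in ranked[:k])

print(patients_like_me([0.60, 0.21, 0.82]))
# e.g. Counter({('drug_a', 'recovered'): 2, ('drug_b', 'recovered'): 1})
```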
  • From Rivals to Partners: What’s Up with the Google and OpenAI Cloud Deal?

    Google and OpenAI struck a cloud computing deal in May, according to a Reuters report.
    The deal surprised the industry as the two are seen as major AI rivals.
    Signs of friction between OpenAI and Microsoft may have also fueled the move.
    The partnership is a win-win. OpenAI gets more badly needed computing resources, while Google profits from the roughly $75 billion it plans to invest in 2025 to boost its cloud computing capacity.

    In a surprise move, Google and OpenAI inked a deal that will see the AI rivals partnering to address OpenAI’s growing cloud computing needs.
    The story, reported by Reuters, cited anonymous sources saying that the deal had been discussed for months and finalized in May. Around that time, OpenAI was struggling to keep up with demand as its weekly active users and business users grew sharply in Q1 2025. There was also speculation of friction between OpenAI and its biggest investor, Microsoft.
    Why the Deal Surprised the Tech Industry
    The rivalry between the two companies hardly needs an introduction. When OpenAI’s ChatGPT launched in November 2022, it posed a huge threat to Google that triggered a code red within the search giant and cloud services provider.
    Since then, Google has launched Bard, later rebranded as Gemini, to compete with OpenAI head-on. However, it had to play catch-up with OpenAI’s more advanced ChatGPT chatbot. This led to numerous issues with Bard, with critics referring to it as a half-baked product.

    A post on X in February 2023 showed the Bard chatbot erroneously stating that the James Webb Space Telescope took the first picture of an exoplanet. It was, in fact, the European Southern Observatory’s Very Large Telescope that did this in 2004. Google’s parent company Alphabet lost $100B off its market value within 24 hours as a result.
    Two years on, Gemini has made significant strides in accuracy, source citation, and depth of information, but it is still prone to hallucinations from time to time. You can see examples posted on social media, like the AI telling a user to make spicy spaghetti with gasoline or insisting it’s still 2024.

    With the entire industry shifting towards more AI integrations, Google went ahead and integrated its AI suite into Search via AI Overviews. It then doubled down on this integration with AI Mode, an experimental feature that lets you perform AI-powered searches by typing in a question, uploading a photo, or using your voice.
    In the future, AI Mode in Google Search could be a viable competitor to ChatGPT—unless, of course, Google decides to bin it along with many of its previous products. Given the scope of the investment and Gemini’s significant improvement, we doubt AI + Search will be axed.
    It’s a Win-Win for Google and OpenAI—Not So Much for Microsoft?
    In the business world, money and the desire for expansion can break even the biggest rivalries. And the one between the two tech giants isn’t an exception.
    Partly, it could be attributed to OpenAI’s relationship with Microsoft. Although the Redmond, Washington-based company has invested billions in OpenAI and has the resources to meet the latter’s cloud computing needs, their partnership hasn’t always been rosy. 
    Some would say it began when OpenAI CEO Sam Altman was briefly ousted in November 2023, which put a strain on the ‘best bromance in tech’ between him and Microsoft CEO Satya Nadella. Then last year, Microsoft added OpenAI to its list of competitors in the AI space before eventually losing its status as OpenAI’s exclusive cloud provider in January 2025.
    If that wasn’t enough, there’s also the matter of the two companies’ goal of achieving artificial general intelligence (AGI). Defined as the point at which OpenAI develops AI systems that generate $100B in profits, reaching AGI means Microsoft will lose access to OpenAI’s technology. With the company behind ChatGPT expecting to triple its 2025 revenue to $12.7B from $3.7B the previous year, this could happen sooner rather than later.
    While OpenAI already has deals with Microsoft, Oracle, and CoreWeave to provide it with cloud services and access to infrastructure, it needs more, and soon, as the company has seen massive growth in the past few months.
    In February, OpenAI announced that it had over 400M weekly active users, up from 300M in December 2024. Meanwhile, the number of its business users who use ChatGPT Enterprise, ChatGPT Team, and ChatGPT Edu products also jumped from 2M in February to 3M in March.
    The good news is that Google is more than ready to deliver. Its parent company has earmarked $75B for its investments in AI this year, which includes boosting its cloud computing capacity.

    In April, Google launched its 7th-generation tensor processing unit (TPU), called Ironwood, which has been designed specifically for inference. According to the company, the new TPU will help power AI models that will ‘proactively retrieve and generate data to collaboratively deliver insights and answers, not just data.’ The deal with OpenAI can be seen as a vote of confidence in Google’s cloud computing business, which competes with the likes of Microsoft Azure and Amazon Web Services. It also expands Google’s vast client list, which includes tech, gaming, entertainment, and retail companies, as well as organizations in the public sector.

    As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist, Cedric Solidon, continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy.
    With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility.
    Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines.
    Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech. 
    He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom.
    That fascination with tech didn’t just stick. It evolved into a full-blown calling.
    After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career.
    He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy.
    His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers.
    At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap.
    Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual.
    As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting.
    From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.

  • How addresses are collected and put on people finder sites

    Published
    June 14, 2025 10:00am EDT
    Your home address might be easier to find online than you think. A quick search of your name could turn up past and current locations, all thanks to people finder sites. These data broker sites quietly collect and publish personal details without your consent, making your privacy vulnerable with just a few clicks. Sign up for my FREE CyberGuy Report: Get my best tech tips, urgent security alerts, and exclusive deals delivered straight to your inbox. Plus, you’ll get instant access to my Ultimate Scam Survival Guide — free when you join. A woman searching for herself online. (Kurt "CyberGuy" Knutsson) How your address gets exposed online and who’s using it: If you’ve ever searched for your name and found personal details, like your address, on unfamiliar websites, you’re not alone. People finder platforms collect this information from public records and third-party data brokers, then publish and share it widely. They often link your address to other details such as phone numbers, email addresses and even relatives. 11 EASY WAYS TO PROTECT YOUR ONLINE PRIVACY IN 2025. While this data may already be public in various places, these sites make it far easier to access and monetize it at scale. In one recent breach, more than 183 million login credentials were exposed through an unsecured database. Many of these records were linked to physical addresses, raising concerns about how multiple sources of personal data can be combined and exploited. Although people finder sites claim to help reconnect friends or locate lost contacts, they also make sensitive personal information available to anyone willing to pay. This includes scammers, spammers and identity thieves who use it for fraud, harassment, and targeted scams. A woman searching for herself online. (Kurt "CyberGuy" Knutsson) How do people search sites get your home address? First, let’s define two sources of information: the public and private databases that people search sites use to build your detailed profile, including your home address. They run an automated search on these databases with key information about you and add your home address from the search results. 1. Public sources. Your home address can appear in: Property deeds: When you buy or sell a home, your name and address become part of the public record. Voter registration: You need to list your address when voting. Court documents: Addresses appear in legal filings or lawsuits. Marriage and divorce records: These often include current or past addresses. Business licenses and professional registrations: If you own a business or hold a license, your address can be listed. WHAT IS ARTIFICIAL INTELLIGENCE (AI)? These records are legal to access, and people finder sites collect and repackage them into detailed personal profiles. 2. Private sources. Other sites buy your data from companies you’ve interacted with: Online purchases: When you buy something online, your address is recorded and can be sold to marketing companies. Subscriptions and memberships: Magazines, clubs and loyalty programs often share your information. Social media platforms: Your location or address details can be gathered indirectly from posts, photos or shared information. Mobile apps and websites: Some apps track your location. People finder sites buy this data from other data brokers and combine it with public records to build complete profiles that include address information.
    A woman searching for herself online. (Kurt "CyberGuy" Knutsson) What are the risks of having your address on people finder sites? The Federal Trade Commission (FTC) advises people to request the removal of their private data, including home addresses, from people search sites due to the associated risks of stalking, scamming and other crimes. People search sites are a goldmine for cybercriminals looking to target and profile potential victims as well as plan comprehensive cyberattacks. Losses due to targeted phishing attacks increased by 33% in 2024, according to the FBI. So, having your home address publicly accessible can lead to several risks: Stalking and harassment: Criminals can easily find your home address and threaten you. Identity theft: Scammers can use your address and other personal information to impersonate you or fraudulently open accounts. Unwanted contact: Marketers and scammers can use your address to send junk mail, phishing attempts or brushing scams. Increased financial risks: Insurance companies or lenders can use publicly available address information to unfairly decide your rates or eligibility. Burglary and home invasion: Criminals can use your location to target your home when you’re away or vulnerable. How to protect your home address: The good news is that you can take steps to reduce the risks and keep your address private. However, keep in mind that data brokers and people search sites can re-list your information after some time, so you might need to request data removal periodically. I recommend a few ways to delete your private information, including your home address, from such websites. 1. Use personal data removal services: Data brokers can sell your home address and other personal data to multiple businesses and individuals, so the key is to act fast. If you’re looking for an easier way to protect your privacy, a data removal service can do the heavy lifting for you, automatically requesting data removal from brokers and tracking compliance. While no service can guarantee the complete removal of your data from the internet, a data removal service is a smart choice. They aren’t cheap — and neither is your privacy. These services do all the work for you by actively monitoring and systematically erasing your personal information from hundreds of websites. It’s what gives me peace of mind and has proven to be the most effective way to erase your personal data from the internet. By limiting the information available, you reduce the risk of scammers cross-referencing data from breaches with information they might find on the dark web, making it harder for them to target you. Check out my top picks for data removal services here. Get a free scan to find out if your personal information is already out on the web. 2. Opt out manually: Use a free scanner provided by a data removal service to check which people search sites list your address. Then, visit each of these websites and look for an opt-out procedure or form; keywords like "opt out" and "delete my information" point the way. Follow each site’s opt-out process carefully, and confirm they’ve removed all your personal info; otherwise, it may get relisted. 3. Monitor your digital footprint: I recommend regularly searching online for your name to see if your location is publicly available. If only your social media profile pops up, there’s no need to worry. However, people finder sites tend to relist your private information, including your home address, after some time.
    4. Limit sharing your address online: Be careful about sharing your home address on social media, online forms and apps. Review privacy settings regularly, and only provide your address when absolutely necessary. Also, adjust your phone settings so that apps don’t track your location. Kurt’s key takeaways: Your home address is more vulnerable than you think. People finder sites aggregate data from public records and private sources to display your address online, often without your knowledge or consent. This can lead to serious privacy and safety risks. Taking proactive steps to protect your home address is essential. Do it manually or use a data removal tool for an easier process. By understanding how your location is collected and taking measures to remove your address from online sites, you can reclaim control over your personal data. CLICK HERE TO GET THE FOX NEWS APP. How do you feel about companies making your home address so easy to find? Let us know by writing us at Cyberguy.com/Contact. For more of my tech tips and security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter. Ask Kurt a question or let us know what stories you'd like us to cover. Copyright 2025 CyberGuy.com. All rights reserved. Kurt "CyberGuy" Knutsson is an award-winning tech journalist who has a deep love of technology, gear and gadgets that make life better with his contributions for Fox News & FOX Business beginning mornings on "FOX & Friends." Got a tech question? Get Kurt’s free CyberGuy Newsletter, share your voice, a story idea or comment at CyberGuy.com.
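    The article notes that people search sites can relist your information after a removal request, so opt-outs need to be repeated periodically. As a rough illustration of that habit, the Python sketch below keeps a small log of opt-out requests and flags which sites are due for a re-check; the site names and the 90-day interval are illustrative assumptions, not recommendations from the article.

```python
from datetime import date, timedelta

# Illustrative opt-out log; the site names are placeholders, not real services.
opt_out_log = [
    {"site": "example-people-search.com", "opted_out_on": date(2025, 3, 1)},
    {"site": "another-finder.example", "opted_out_on": date(2025, 5, 20)},
]

# Assumed re-check cadence; people finder sites may relist data on their own schedule.
RECHECK_INTERVAL = timedelta(days=90)

def due_for_recheck(log, today=None):
    """Return log entries whose opt-out request is older than the re-check interval."""
    today = today or date.today()
    return [entry for entry in log if today - entry["opted_out_on"] >= RECHECK_INTERVAL]

for entry in due_for_recheck(opt_out_log):
    print(f"Re-check {entry['site']} (opted out on {entry['opted_out_on']})")
```

    A data removal service automates the same bookkeeping across hundreds of sites; the sketch only makes the manual version of the routine concrete.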
  • SAG-AFTRA proposed AI protections will let performers send their digital replicas on strike

    A tentative agreement proposed by the union will also require game studios to secure informed consent from performers when using AI. Chris Kerr, Senior Editor, News. June 13, 2025. 1 Min Read. Image via SAG-AFTRA. Performer union SAG-AFTRA has outlined what sort of AI protections have been secured through its new-look Interactive Media Agreement (IMA). The union, which this week suspended a year-long strike after finally agreeing terms with game studios signed to the IMA, said the new contract includes "important guardrails and gains around AI," such as the need for informed consent when deploying AI tech and the ability for performers to suspend consent for Digital Replicas during a strike—effectively sending their digital counterparts to the picket line. Compensation gains include collectively bargained minimums covering the use of Digital Replicas created with IMA-covered performances and higher minimums (7.5x scale) for what SAG-AFTRA calls "Real Time Generation," which is when a Digital Replica-voiced chatbot might be embedded in a video game. Secondary Performance Payments will also require studios to compensate performers when visual performances are reused in additional projects. The tentative agreement has already been approved by the SAG-AFTRA National Board and has now been submitted to union members for ratification. If ratified, it will also provide compounded compensation increases at a rate of 15.17 percent plus additional 3 percent increases in November 2025, November 2026, and November 2027. In addition, the overtime rate maximum for overscale performers will be based on double scale. The full terms of the three-year deal will be released on June 18 alongside other ratification materials. Eligible SAG-AFTRA members will have until 5pm PDT on Wednesday, July 9, to vote on the agreement. About the Author: Chris Kerr, Senior Editor, News, GameDeveloper.com. Game Developer news editor Chris Kerr is an award-winning journalist and reporter with over a decade of experience in the game industry. His byline has appeared in notable print and digital publications including Edge, Stuff, Wireframe, International Business Times, and PocketGamer.biz. Throughout his career, Chris has covered major industry events including GDC, PAX Australia, Gamescom, Paris Games Week, and Develop Brighton. He has featured on the judging panel at The Develop Star Awards on multiple occasions and appeared on BBC Radio 5 Live to discuss breaking news.
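    To make the compensation math concrete, the Python sketch below applies the increases quoted above (a 15.17 percent increase compounded with additional 3 percent increases in November 2025, 2026, and 2027) to a hypothetical base scale rate; the $1,000 starting figure, and the 7.5x multiplier applied to it, are assumptions for illustration only, not actual contract minimums.

```python
# Hypothetical base scale rate; real contract minimums are published in the full agreement.
base_rate = 1000.00

# Increases quoted by SAG-AFTRA: 15.17% compounded with 3% raises each November.
increases = [
    ("initial 15.17% increase", 0.1517),
    ("November 2025 (+3%)", 0.03),
    ("November 2026 (+3%)", 0.03),
    ("November 2027 (+3%)", 0.03),
]

rate = base_rate
for label, pct in increases:
    rate *= 1 + pct  # compound each increase on the previous rate
    print(f"{label}: {rate:,.2f}")

# Real Time Generation minimum is quoted at 7.5x scale (illustrated against the same base).
print(f"Real Time Generation minimum at 7.5x scale: {7.5 * base_rate:,.2f}")
```

    Over the life of the deal, that compounding works out to roughly a 26 percent cumulative increase on the starting rate.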
    #sagaftra #proposed #protections #will #let
    SAG-AFTRA proposed AI protections will let performers send their digital replicas on strike
    TechTarget and Informa Tech’s Digital Business Combine.TechTarget and InformaTechTarget and Informa Tech’s Digital Business Combine.Together, we power an unparalleled network of 220+ online properties covering 10,000+ granular topics, serving an audience of 50+ million professionals with original, objective content from trusted sources. We help you gain critical insights and make more informed decisions across your business priorities.SAG-AFTRA proposed AI protections will let performers send their digital replicas on strikeSAG-AFTRA proposed AI protections will let performers send their digital replicas on strikeA tentative agreement proposed by the union will also require game studios to secure informed consent from performers when using AI.Chris Kerr, Senior Editor, NewsJune 13, 20251 Min ReadImage via SAG-AFTRAPerformer union SAG-AFTRA has outlined what sort of AI protections have been secured through its new-look Interactive Media Agreement.The union, which this week suspended a year-long strike after finally agreeing terms with game studios signed to the IMA, said the new contract includes "important guardrails and gains around AI" such as the need for informed consent when deploying AI tech and the ability for performers to suspend consent for Digital Replicas during a strike—effectively sending their digital counterparts to the picket line.Compensation gains include the need for collectively-bargained minimums covering the use of Digital Replicas created with IMA-covered performances and higher minimumsfor what SAG-AFTRA calls "Real Time Generation," which is when a Digital Replica-voiced chatbot might be embedded in a video game.Secondary Performance Payments will also require studios to compensate performers when visual performances are reused in additional projects. The tentative agreement has already been approved by the SAG-AFTRA National Board and has now been submitted to union members for ratification.If ratified, it will also provide compounded compensation increases at a rate of 15.17 percent plus additional 3 percent increases in November 2025, November 2026, and November 2027. In addition, the overtime rate maximum for overscale performers will be based on double scale.Related:The full terms off the three-year deal will be released on June 18 alongside other ratification materials. Eligible SAG-AFTRA members will have until 5pm PDT on Wednesday, July 9, to vote on the agreement.  about:Labor & UnionizationAbout the AuthorChris KerrSenior Editor, News, GameDeveloper.comGame Developer news editor Chris Kerr is an award-winning journalist and reporter with over a decade of experience in the game industry. His byline has appeared in notable print and digital publications including Edge, Stuff, Wireframe, International Business Times, and PocketGamer.biz. Throughout his career, Chris has covered major industry events including GDC, PAX Australia, Gamescom, Paris Games Week, and Develop Brighton. He has featured on the judging panel at The Develop Star Awards on multiple occasions and appeared on BBC Radio 5 Live to discuss breaking news.See more from Chris KerrDaily news, dev blogs, and stories from Game Developer straight to your inboxStay UpdatedYou May Also Like #sagaftra #proposed #protections #will #let
    WWW.GAMEDEVELOPER.COM
SAG-AFTRA proposed AI protections will let performers send their digital replicas on strike

A tentative agreement proposed by the union will also require game studios to secure informed consent from performers when using AI.

Chris Kerr, Senior Editor, News | June 13, 2025 | 1 Min Read | Image via SAG-AFTRA

Performer union SAG-AFTRA has outlined what sort of AI protections have been secured through its new-look Interactive Media Agreement (IMA).

The union, which this week suspended a year-long strike after finally agreeing terms with game studios signed to the IMA, said the new contract includes "important guardrails and gains around AI," such as the need for informed consent when deploying AI tech and the ability for performers to suspend consent for Digital Replicas during a strike, effectively sending their digital counterparts to the picket line.

Compensation gains include collectively bargained minimums covering the use of Digital Replicas created with IMA-covered performances, and higher minimums (7.5x scale) for what SAG-AFTRA calls "Real Time Generation," which covers cases where a Digital Replica-voiced chatbot might be embedded in a video game.

Secondary Performance Payments will also require studios to compensate performers when visual performances are reused in additional projects. The tentative agreement has already been approved by the SAG-AFTRA National Board and has now been submitted to union members for ratification.

If ratified, it will also provide compounded compensation increases at a rate of 15.17 percent, plus additional 3 percent increases in November 2025, November 2026, and November 2027. In addition, the overtime rate maximum for overscale performers will be based on double scale.

The full terms of the three-year deal will be released on June 18 alongside other ratification materials. Eligible SAG-AFTRA members will have until 5 pm PDT on Wednesday, July 9, to vote on the agreement.

Read more about: Labor & Unionization
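As a rough illustration of how those raises could stack, the sketch below applies the 15.17 percent increase to a hypothetical $1,000 scale rate and then compounds an assumed 3 percent raise each November. The actual mechanics are defined by the contract and the forthcoming ratification materials, not by this example.

```python
# Illustrative only: growth of a hypothetical $1,000 rate under the assumption
# that the 15.17% increase applies first and the 3% raises in November 2025,
# 2026, and 2027 compound on top of it.
base_rate = 1000.00

rate = base_rate * 1.1517          # initial 15.17% increase
print(f"On ratification: ${rate:,.2f}")

for year in (2025, 2026, 2027):
    rate *= 1.03                   # assumed compounding 3% raise each November
    print(f"November {year}: ${rate:,.2f}")
```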
Search results