• Who knew basketball needed an interactive LED floor? Seriously? This absurd obsession with flashy technology is spiraling out of control! ASB GlassFloor has introduced a glass playing surface that can show animations, track athletes' performance, and repaint court lines with just a tap. What’s next? Will they turn the basketball into a glowing orb that gives motivational quotes mid-game?

    Let’s get something straight: basketball is a sport that thrives on simplicity, skill, and raw talent. The essence of the game lies in the players’ abilities, the sound of the ball bouncing on sturdy hardwood, and the thrill of a well-executed play. But no, that’s not enough for the tech-obsessed minds out there. Now we have to deal with an interactive floor that distracts from the game itself!

    Why in the world do we need animations on the court? Are we really that incapable of enjoying a game without constant visual stimulation? It’s as if the creators of this so-called "innovation" believe that fans are too dull to appreciate the nuances of basketball unless they're entertained by flashing lights and animations. This is a disgrace to the sport!

    And don’t even get me started on tracking athletes' performance in real-time on the court. As if we didn’t already have enough statistics thrown at us during a game! Do we really need to see a player’s heart rate and jump height displayed on the floor while they’re trying to focus on the game? This is a violation of the fundamental spirit of competition. Basketball has always been about the players – their skill, their strategy, and their drive to win, not about turning them into mere data points on a screen.

    Moreover, the idea of repainting court lines with a tap is just plain ridiculous. What’s wrong with the traditional method? A few lines on the court have worked just fine for decades! Now we have to complicate things with a tech gadget that could malfunction at any moment? Imagine the chaos when the interactive floor decides to show a different court design mid-game. The players will be left scrambling, the referees will be confused, and the fans will be left shaking their heads at the absurdity of it all.

    And let’s be real – this gimmick is nothing but a marketing ploy. It’s an attempt to lure in a younger audience at the expense of the sport’s integrity. Yes, pros in Europe are already playing on it, but that doesn’t mean it’s a good idea! Just because something is trendy doesn’t make it right. Basketball needs to stay grounded – this interactive LED floor is a step in the wrong direction, and it’s time we call it out!

    Stop letting technology dictate how we enjoy sports. Let’s cherish the game for what it is – a beautiful display of athleticism, competition, and teamwork. Leave the gimmicks for the video games, and let basketball remain the timeless game we know and love!

    #Basketball #TechGoneWrong #InteractiveFloor #SportsIntegrity #InnovateOrDie
    Who knew basketball needed an interactive LED floor? Seriously? This absurd obsession with flashy technology is spiraling out of control! ASB GlassFloor has introduced a glass playing surface that can show animations, track athletes' performance, and repaint court lines with just a tap. What’s next? Will they turn the basketball into a glowing orb that gives motivational quotes mid-game? Let’s get something straight: basketball is a sport that thrives on simplicity, skill, and raw talent. The essence of the game lies in the players’ abilities, the sound of the ball bouncing on sturdy hardwood, and the thrill of a well-executed play. But no, that’s not enough for the tech-obsessed minds out there. Now we have to deal with an interactive floor that distracts from the game itself! Why in the world do we need animations on the court? Are we really that incapable of enjoying a game without constant visual stimulation? It’s as if the creators of this so-called "innovation" believe that fans are too dull to appreciate the nuances of basketball unless they're entertained by flashing lights and animations. This is a disgrace to the sport! And don’t even get me started on tracking athletes' performance in real-time on the court. As if we didn’t already have enough statistics thrown at us during a game! Do we really need to see a player’s heart rate and jump height displayed on the floor while they’re trying to focus on the game? This is a violation of the fundamental spirit of competition. Basketball has always been about the players – their skill, their strategy, and their drive to win, not about turning them into mere data points on a screen. Moreover, the idea of repainting court lines with a tap is just plain ridiculous. What’s wrong with the traditional method? A few lines on the court have worked just fine for decades! Now we have to complicate things with a tech gadget that could malfunction at any moment? Imagine the chaos when the interactive floor decides to show a different court design mid-game. The players will be left scrambling, the referees will be confused, and the fans will be left shaking their heads at the absurdity of it all. And let’s be real – this gimmick is nothing but a marketing ploy. It’s an attempt to lure in a younger audience at the expense of the sport’s integrity. Yes, pros in Europe are already playing on it, but that doesn’t mean it’s a good idea! Just because something is trendy doesn’t make it right. Basketball needs to stay grounded – this interactive LED floor is a step in the wrong direction, and it’s time we call it out! Stop letting technology dictate how we enjoy sports. Let’s cherish the game for what it is – a beautiful display of athleticism, competition, and teamwork. Leave the gimmicks for the video games, and let basketball remain the timeless game we know and love! #Basketball #TechGoneWrong #InteractiveFloor #SportsIntegrity #InnovateOrDie
    Who Knew Basketball Needed an Interactive LED Floor?
    ASB GlassFloor makes a glass playing surface for sports arenas that can show animations, track athletes' performance, and repaint court lines with a tap. Pros in Europe are already playing on it.
    Like
    Love
    Wow
    Angry
    Sad
    452
    1 Комментарии 0 Поделились
  • In a world where creativity reigns supreme, Adobe has just gifted us with a shiny new toy: the Firefly Boards. Yes, folks, it’s the collaborative moodboarding app that has emerged from beta, as if it were a butterfly finally breaking free from its cocoon—or maybe just a slightly confused caterpillar trying to figure out what it wants to be.

    Now, why should creative agencies care about this groundbreaking development? Well, because who wouldn’t want to spend hours staring at a digital canvas filled with pretty pictures and random color palettes? Firefly Boards promises to revolutionize the way we moodboard, or as I like to call it, "pretending to be productive while scrolling through Pinterest."

    Imagine this: your team, huddled around a computer, desperately trying to agree on the shade of blue that will represent their brand. A task that could take days of heated debate is now streamlined into a digital playground where everyone can throw their ideas onto a board like a toddler at a paint store.

    But let's be real. Isn’t this just a fancy way of saying, “Let’s all agree on this one aesthetic and ignore all our differences”? Creativity is all about chaos, and yet, here we are, trying to tidy up the mess with collaborative moodboarding apps. What’s next? A group hug to decide on the font size?

    Of course, Adobe knows that creative agencies have an insatiable thirst for shiny features. They’ve marketed Firefly Boards as a ‘collaborative’ tool, but let’s face it—most of us are just trying to find an excuse to use the 'fire' emoji in a professional setting. It’s as if they’re saying, “Trust us, this will make your life easier!” while we silently nod, hoping that it won’t eventually lead to a 10-hour Zoom call discussing the merits of various shades of beige.

    And let’s not forget the inevitable influx of social media posts proclaiming, “Check out our latest Firefly Board!” — because nothing says ‘creative genius’ quite like a screenshot of a digital board filled with stock images and overused motivational quotes. Can’t wait to see how many ‘likes’ that garners!

    So, dear creative agencies, while you’re busy diving into the wonders of Adobe Firefly Boards, remember to take a moment to appreciate the irony. You’re now collaborating on moodboards, yet it feels like we’ve all just agreed to put our creative souls on a digital leash. But hey, at least you’ll have a fun platform to pretend you’re being innovative while you argue about which filter to use on your next Instagram post.

    #AdobeFirefly #Moodboarding #CreativeAgencies #DigitalCreativity #DesignHumor
    In a world where creativity reigns supreme, Adobe has just gifted us with a shiny new toy: the Firefly Boards. Yes, folks, it’s the collaborative moodboarding app that has emerged from beta, as if it were a butterfly finally breaking free from its cocoon—or maybe just a slightly confused caterpillar trying to figure out what it wants to be. Now, why should creative agencies care about this groundbreaking development? Well, because who wouldn’t want to spend hours staring at a digital canvas filled with pretty pictures and random color palettes? Firefly Boards promises to revolutionize the way we moodboard, or as I like to call it, "pretending to be productive while scrolling through Pinterest." Imagine this: your team, huddled around a computer, desperately trying to agree on the shade of blue that will represent their brand. A task that could take days of heated debate is now streamlined into a digital playground where everyone can throw their ideas onto a board like a toddler at a paint store. But let's be real. Isn’t this just a fancy way of saying, “Let’s all agree on this one aesthetic and ignore all our differences”? Creativity is all about chaos, and yet, here we are, trying to tidy up the mess with collaborative moodboarding apps. What’s next? A group hug to decide on the font size? Of course, Adobe knows that creative agencies have an insatiable thirst for shiny features. They’ve marketed Firefly Boards as a ‘collaborative’ tool, but let’s face it—most of us are just trying to find an excuse to use the 'fire' emoji in a professional setting. It’s as if they’re saying, “Trust us, this will make your life easier!” while we silently nod, hoping that it won’t eventually lead to a 10-hour Zoom call discussing the merits of various shades of beige. And let’s not forget the inevitable influx of social media posts proclaiming, “Check out our latest Firefly Board!” — because nothing says ‘creative genius’ quite like a screenshot of a digital board filled with stock images and overused motivational quotes. Can’t wait to see how many ‘likes’ that garners! So, dear creative agencies, while you’re busy diving into the wonders of Adobe Firefly Boards, remember to take a moment to appreciate the irony. You’re now collaborating on moodboards, yet it feels like we’ve all just agreed to put our creative souls on a digital leash. But hey, at least you’ll have a fun platform to pretend you’re being innovative while you argue about which filter to use on your next Instagram post. #AdobeFirefly #Moodboarding #CreativeAgencies #DigitalCreativity #DesignHumor
    Why creative agencies need to know about new Adobe Firefly Boards
    The collaborative moodboarding app is now out of beta.
    Like
    Love
    Wow
    Angry
    Sad
    512
    1 Комментарии 0 Поделились
  • Q&A: How anacondas, chickens, and locals may be able to coexist in the Amazon

    A coiled giant anaconda. They are the largest snake species in Brazil and play a major role in legends including the ‘Boiuna’ and the ‘Cobra Grande.’ CREDIT: Beatriz Cosendey.

    Get the Popular Science daily newsletter
    Breakthroughs, discoveries, and DIY tips sent every weekday.

    South America’s lush Amazon region is a biodiversity hotspot, which means that every living thing must find a way to co-exist. Even some of the most feared snakes on the planet–anacondas. In a paper published June 16 in the journal Frontiers in Amphibian and Reptile Science, conservation biologists Beatriz Cosendey and Juarez Carlos Brito Pezzuti from the Federal University of Pará’s Center for Amazonian Studies in Brazil, analyze the key points behind the interactions between humans and the local anaconda populations.
    Ahead of the paper’s publication, the team at Frontiers conducted this wide-ranging Q&A with Conesday. It has not been altered.
    Frontiers: What inspired you to become a researcher?
    Beatriz Cosendey: As a child, I was fascinated by reports and documentaries about field research and often wondered what it took to be there and what kind of knowledge was being produced. Later, as an ecologist, I felt the need for approaches that better connected scientific research with real-world contexts. I became especially interested in perspectives that viewed humans not as separate from nature, but as part of ecological systems. This led me to explore integrative methods that incorporate local and traditional knowledge, aiming to make research more relevant and accessible to the communities involved.
    F: Can you tell us about the research you’re currently working on?
    BC: My research focuses on ethnobiology, an interdisciplinary field intersecting ecology, conservation, and traditional knowledge. We investigate not only the biodiversity of an area but also the relationship local communities have with surrounding species, providing a better understanding of local dynamics and areas needing special attention for conservation. After all, no one knows a place better than those who have lived there for generations. This deep familiarity allows for early detection of changes or environmental shifts. Additionally, developing a collaborative project with residents generates greater engagement, as they recognize themselves as active contributors; and collective participation is essential for effective conservation.
    Local boating the Amazon River. CREDIT: Beatriz Cosendey.
    F: Could you tell us about one of the legends surrounding anacondas?
    BC: One of the greatest myths is about the Great Snake—a huge snake that is said to inhabit the Amazon River and sleep beneath the town. According to the dwellers, the Great Snake is an anaconda that has grown too large; its movements can shake the river’s waters, and its eyes look like fire in the darkness of night. People say anacondas can grow so big that they can swallow large animals—including humans or cattle—without difficulty.
    F: What could be the reasons why the traditional role of anacondas as a spiritual and mythological entity has changed? Do you think the fact that fewer anacondas have been seen in recent years contributes to their diminished importance as an mythological entity?
    BC: Not exactly. I believe the two are related, but not in a direct way. The mythology still exists, but among Aritapera dwellers, there’s a more practical, everyday concern—mainly the fear of losing their chickens. As a result, anacondas have come to be seen as stealthy thieves. These traits are mostly associated with smaller individuals, while the larger ones—which may still carry the symbolic weight of the ‘Great Snake’—tend to retreat to more sheltered areas; because of the presence of houses, motorized boats, and general noise, they are now seen much less frequently.
    A giant anaconda is being measured. Credit: Pedro Calazans.
    F: Can you share some of the quotes you’ve collected in interviews that show the attitude of community members towards anacondas? How do chickens come into play?
    BC: When talking about anacondas, one thing always comes up: chickens. “Chicken is herfavorite dish. If one clucks, she comes,” said one dweller. This kind of remark helps explain why the conflict is often framed in economic terms. During the interviews and conversations with local dwellers, many emphasized the financial impact of losing their animals: “The biggest loss is that they keep taking chicks and chickens…” or “You raise the chicken—you can’t just let it be eaten for free, right?”
    For them, it’s a loss of investment, especially since corn, which is used as chicken feed, is expensive. As one person put it: “We spend time feeding and raising the birds, and then the snake comes and takes them.” One dweller shared that, in an attempt to prevent another loss, he killed the anaconda and removed the last chicken it had swallowed from its belly—”it was still fresh,” he said—and used it for his meal, cooking the chicken for lunch so it wouldn’t go to waste.
    One of the Amazonas communities where the researchers conducted their research. CREDIT: Beatriz Cosendey.
    Some interviewees reported that they had to rebuild their chicken coops and pigsties because too many anacondas were getting in. Participants would point out where the anaconda had entered and explained that they came in through gaps or cracks but couldn’t get out afterwards because they ‘tufavam’ — a local term referring to the snake’s body swelling after ingesting prey.
    We saw chicken coops made with mesh, with nylon, some that worked and some that didn’t. Guided by the locals’ insights, we concluded that the best solution to compensate for the gaps between the wooden slats is to line the coop with a fine nylon mesh, and on the outside, a layer of wire mesh, which protects the inner mesh and prevents the entry of larger animals.
    F: Are there any common misconceptions about this area of research? How would you address them?
    BC: Yes, very much. Although ethnobiology is an old science, it’s still underexplored and often misunderstood. In some fields, there are ongoing debates about the robustness and scientific validity of the field and related areas. This is largely because the findings don’t always rely only on hard statistical data.
    However, like any other scientific field, it follows standardized methodologies, and no result is accepted without proper grounding. What happens is that ethnobiology leans more toward the human sciences, placing human beings and traditional knowledge as key variables within its framework.
    To address these misconceptions, I believe it’s important to emphasize that ethnobiology produces solid and relevant knowledge—especially in the context of conservation and sustainable development. It offers insights that purely biological approaches might overlook and helps build bridges between science and society.
    The study focused on the várzea regions of the Lower Amazon River. CREDIT: Beatriz Cosendey.
    F: What are some of the areas of research you’d like to see tackled in the years ahead?
    BC: I’d like to see more conservation projects that include local communities as active participants rather than as passive observers. Incorporating their voices, perspectives, and needs not only makes initiatives more effective, but also more just. There is also great potential in recognizing and valuing traditional knowledge. Beyond its cultural significance, certain practices—such as the use of natural compounds—could become practical assets for other vulnerable regions. Once properly documented and understood, many of these approaches offer adaptable forms of environmental management and could help inform broader conservation strategies elsewhere.
    F: How has open science benefited the reach and impact of your research?
    BC: Open science is crucial for making research more accessible. By eliminating access barriers, it facilitates a broader exchange of knowledge—important especially for interdisciplinary research like mine which draws on multiple knowledge systems and gains value when shared widely. For scientific work, it ensures that knowledge reaches a wider audience, including practitioners and policymakers. This openness fosters dialogue across different sectors, making research more inclusive and encouraging greater collaboration among diverse groups.
    The Q&A can also be read here.
    #qampampa #how #anacondas #chickens #locals
    Q&A: How anacondas, chickens, and locals may be able to coexist in the Amazon
    A coiled giant anaconda. They are the largest snake species in Brazil and play a major role in legends including the ‘Boiuna’ and the ‘Cobra Grande.’ CREDIT: Beatriz Cosendey. Get the Popular Science daily newsletter💡 Breakthroughs, discoveries, and DIY tips sent every weekday. South America’s lush Amazon region is a biodiversity hotspot, which means that every living thing must find a way to co-exist. Even some of the most feared snakes on the planet–anacondas. In a paper published June 16 in the journal Frontiers in Amphibian and Reptile Science, conservation biologists Beatriz Cosendey and Juarez Carlos Brito Pezzuti from the Federal University of Pará’s Center for Amazonian Studies in Brazil, analyze the key points behind the interactions between humans and the local anaconda populations. Ahead of the paper’s publication, the team at Frontiers conducted this wide-ranging Q&A with Conesday. It has not been altered. Frontiers: What inspired you to become a researcher? Beatriz Cosendey: As a child, I was fascinated by reports and documentaries about field research and often wondered what it took to be there and what kind of knowledge was being produced. Later, as an ecologist, I felt the need for approaches that better connected scientific research with real-world contexts. I became especially interested in perspectives that viewed humans not as separate from nature, but as part of ecological systems. This led me to explore integrative methods that incorporate local and traditional knowledge, aiming to make research more relevant and accessible to the communities involved. F: Can you tell us about the research you’re currently working on? BC: My research focuses on ethnobiology, an interdisciplinary field intersecting ecology, conservation, and traditional knowledge. We investigate not only the biodiversity of an area but also the relationship local communities have with surrounding species, providing a better understanding of local dynamics and areas needing special attention for conservation. After all, no one knows a place better than those who have lived there for generations. This deep familiarity allows for early detection of changes or environmental shifts. Additionally, developing a collaborative project with residents generates greater engagement, as they recognize themselves as active contributors; and collective participation is essential for effective conservation. Local boating the Amazon River. CREDIT: Beatriz Cosendey. F: Could you tell us about one of the legends surrounding anacondas? BC: One of the greatest myths is about the Great Snake—a huge snake that is said to inhabit the Amazon River and sleep beneath the town. According to the dwellers, the Great Snake is an anaconda that has grown too large; its movements can shake the river’s waters, and its eyes look like fire in the darkness of night. People say anacondas can grow so big that they can swallow large animals—including humans or cattle—without difficulty. F: What could be the reasons why the traditional role of anacondas as a spiritual and mythological entity has changed? Do you think the fact that fewer anacondas have been seen in recent years contributes to their diminished importance as an mythological entity? BC: Not exactly. I believe the two are related, but not in a direct way. The mythology still exists, but among Aritapera dwellers, there’s a more practical, everyday concern—mainly the fear of losing their chickens. As a result, anacondas have come to be seen as stealthy thieves. These traits are mostly associated with smaller individuals, while the larger ones—which may still carry the symbolic weight of the ‘Great Snake’—tend to retreat to more sheltered areas; because of the presence of houses, motorized boats, and general noise, they are now seen much less frequently. A giant anaconda is being measured. Credit: Pedro Calazans. F: Can you share some of the quotes you’ve collected in interviews that show the attitude of community members towards anacondas? How do chickens come into play? BC: When talking about anacondas, one thing always comes up: chickens. “Chicken is herfavorite dish. If one clucks, she comes,” said one dweller. This kind of remark helps explain why the conflict is often framed in economic terms. During the interviews and conversations with local dwellers, many emphasized the financial impact of losing their animals: “The biggest loss is that they keep taking chicks and chickens…” or “You raise the chicken—you can’t just let it be eaten for free, right?” For them, it’s a loss of investment, especially since corn, which is used as chicken feed, is expensive. As one person put it: “We spend time feeding and raising the birds, and then the snake comes and takes them.” One dweller shared that, in an attempt to prevent another loss, he killed the anaconda and removed the last chicken it had swallowed from its belly—”it was still fresh,” he said—and used it for his meal, cooking the chicken for lunch so it wouldn’t go to waste. One of the Amazonas communities where the researchers conducted their research. CREDIT: Beatriz Cosendey. Some interviewees reported that they had to rebuild their chicken coops and pigsties because too many anacondas were getting in. Participants would point out where the anaconda had entered and explained that they came in through gaps or cracks but couldn’t get out afterwards because they ‘tufavam’ — a local term referring to the snake’s body swelling after ingesting prey. We saw chicken coops made with mesh, with nylon, some that worked and some that didn’t. Guided by the locals’ insights, we concluded that the best solution to compensate for the gaps between the wooden slats is to line the coop with a fine nylon mesh, and on the outside, a layer of wire mesh, which protects the inner mesh and prevents the entry of larger animals. F: Are there any common misconceptions about this area of research? How would you address them? BC: Yes, very much. Although ethnobiology is an old science, it’s still underexplored and often misunderstood. In some fields, there are ongoing debates about the robustness and scientific validity of the field and related areas. This is largely because the findings don’t always rely only on hard statistical data. However, like any other scientific field, it follows standardized methodologies, and no result is accepted without proper grounding. What happens is that ethnobiology leans more toward the human sciences, placing human beings and traditional knowledge as key variables within its framework. To address these misconceptions, I believe it’s important to emphasize that ethnobiology produces solid and relevant knowledge—especially in the context of conservation and sustainable development. It offers insights that purely biological approaches might overlook and helps build bridges between science and society. The study focused on the várzea regions of the Lower Amazon River. CREDIT: Beatriz Cosendey. F: What are some of the areas of research you’d like to see tackled in the years ahead? BC: I’d like to see more conservation projects that include local communities as active participants rather than as passive observers. Incorporating their voices, perspectives, and needs not only makes initiatives more effective, but also more just. There is also great potential in recognizing and valuing traditional knowledge. Beyond its cultural significance, certain practices—such as the use of natural compounds—could become practical assets for other vulnerable regions. Once properly documented and understood, many of these approaches offer adaptable forms of environmental management and could help inform broader conservation strategies elsewhere. F: How has open science benefited the reach and impact of your research? BC: Open science is crucial for making research more accessible. By eliminating access barriers, it facilitates a broader exchange of knowledge—important especially for interdisciplinary research like mine which draws on multiple knowledge systems and gains value when shared widely. For scientific work, it ensures that knowledge reaches a wider audience, including practitioners and policymakers. This openness fosters dialogue across different sectors, making research more inclusive and encouraging greater collaboration among diverse groups. The Q&A can also be read here. #qampampa #how #anacondas #chickens #locals
    WWW.POPSCI.COM
    Q&A: How anacondas, chickens, and locals may be able to coexist in the Amazon
    A coiled giant anaconda. They are the largest snake species in Brazil and play a major role in legends including the ‘Boiuna’ and the ‘Cobra Grande.’ CREDIT: Beatriz Cosendey. Get the Popular Science daily newsletter💡 Breakthroughs, discoveries, and DIY tips sent every weekday. South America’s lush Amazon region is a biodiversity hotspot, which means that every living thing must find a way to co-exist. Even some of the most feared snakes on the planet–anacondas. In a paper published June 16 in the journal Frontiers in Amphibian and Reptile Science, conservation biologists Beatriz Cosendey and Juarez Carlos Brito Pezzuti from the Federal University of Pará’s Center for Amazonian Studies in Brazil, analyze the key points behind the interactions between humans and the local anaconda populations. Ahead of the paper’s publication, the team at Frontiers conducted this wide-ranging Q&A with Conesday. It has not been altered. Frontiers: What inspired you to become a researcher? Beatriz Cosendey: As a child, I was fascinated by reports and documentaries about field research and often wondered what it took to be there and what kind of knowledge was being produced. Later, as an ecologist, I felt the need for approaches that better connected scientific research with real-world contexts. I became especially interested in perspectives that viewed humans not as separate from nature, but as part of ecological systems. This led me to explore integrative methods that incorporate local and traditional knowledge, aiming to make research more relevant and accessible to the communities involved. F: Can you tell us about the research you’re currently working on? BC: My research focuses on ethnobiology, an interdisciplinary field intersecting ecology, conservation, and traditional knowledge. We investigate not only the biodiversity of an area but also the relationship local communities have with surrounding species, providing a better understanding of local dynamics and areas needing special attention for conservation. After all, no one knows a place better than those who have lived there for generations. This deep familiarity allows for early detection of changes or environmental shifts. Additionally, developing a collaborative project with residents generates greater engagement, as they recognize themselves as active contributors; and collective participation is essential for effective conservation. Local boating the Amazon River. CREDIT: Beatriz Cosendey. F: Could you tell us about one of the legends surrounding anacondas? BC: One of the greatest myths is about the Great Snake—a huge snake that is said to inhabit the Amazon River and sleep beneath the town. According to the dwellers, the Great Snake is an anaconda that has grown too large; its movements can shake the river’s waters, and its eyes look like fire in the darkness of night. People say anacondas can grow so big that they can swallow large animals—including humans or cattle—without difficulty. F: What could be the reasons why the traditional role of anacondas as a spiritual and mythological entity has changed? Do you think the fact that fewer anacondas have been seen in recent years contributes to their diminished importance as an mythological entity? BC: Not exactly. I believe the two are related, but not in a direct way. The mythology still exists, but among Aritapera dwellers, there’s a more practical, everyday concern—mainly the fear of losing their chickens. As a result, anacondas have come to be seen as stealthy thieves. These traits are mostly associated with smaller individuals (up to around 2–2.5 meters), while the larger ones—which may still carry the symbolic weight of the ‘Great Snake’—tend to retreat to more sheltered areas; because of the presence of houses, motorized boats, and general noise, they are now seen much less frequently. A giant anaconda is being measured. Credit: Pedro Calazans. F: Can you share some of the quotes you’ve collected in interviews that show the attitude of community members towards anacondas? How do chickens come into play? BC: When talking about anacondas, one thing always comes up: chickens. “Chicken is her [the anaconda’s] favorite dish. If one clucks, she comes,” said one dweller. This kind of remark helps explain why the conflict is often framed in economic terms. During the interviews and conversations with local dwellers, many emphasized the financial impact of losing their animals: “The biggest loss is that they keep taking chicks and chickens…” or “You raise the chicken—you can’t just let it be eaten for free, right?” For them, it’s a loss of investment, especially since corn, which is used as chicken feed, is expensive. As one person put it: “We spend time feeding and raising the birds, and then the snake comes and takes them.” One dweller shared that, in an attempt to prevent another loss, he killed the anaconda and removed the last chicken it had swallowed from its belly—”it was still fresh,” he said—and used it for his meal, cooking the chicken for lunch so it wouldn’t go to waste. One of the Amazonas communities where the researchers conducted their research. CREDIT: Beatriz Cosendey. Some interviewees reported that they had to rebuild their chicken coops and pigsties because too many anacondas were getting in. Participants would point out where the anaconda had entered and explained that they came in through gaps or cracks but couldn’t get out afterwards because they ‘tufavam’ — a local term referring to the snake’s body swelling after ingesting prey. We saw chicken coops made with mesh, with nylon, some that worked and some that didn’t. Guided by the locals’ insights, we concluded that the best solution to compensate for the gaps between the wooden slats is to line the coop with a fine nylon mesh (to block smaller animals), and on the outside, a layer of wire mesh, which protects the inner mesh and prevents the entry of larger animals. F: Are there any common misconceptions about this area of research? How would you address them? BC: Yes, very much. Although ethnobiology is an old science, it’s still underexplored and often misunderstood. In some fields, there are ongoing debates about the robustness and scientific validity of the field and related areas. This is largely because the findings don’t always rely only on hard statistical data. However, like any other scientific field, it follows standardized methodologies, and no result is accepted without proper grounding. What happens is that ethnobiology leans more toward the human sciences, placing human beings and traditional knowledge as key variables within its framework. To address these misconceptions, I believe it’s important to emphasize that ethnobiology produces solid and relevant knowledge—especially in the context of conservation and sustainable development. It offers insights that purely biological approaches might overlook and helps build bridges between science and society. The study focused on the várzea regions of the Lower Amazon River. CREDIT: Beatriz Cosendey. F: What are some of the areas of research you’d like to see tackled in the years ahead? BC: I’d like to see more conservation projects that include local communities as active participants rather than as passive observers. Incorporating their voices, perspectives, and needs not only makes initiatives more effective, but also more just. There is also great potential in recognizing and valuing traditional knowledge. Beyond its cultural significance, certain practices—such as the use of natural compounds—could become practical assets for other vulnerable regions. Once properly documented and understood, many of these approaches offer adaptable forms of environmental management and could help inform broader conservation strategies elsewhere. F: How has open science benefited the reach and impact of your research? BC: Open science is crucial for making research more accessible. By eliminating access barriers, it facilitates a broader exchange of knowledge—important especially for interdisciplinary research like mine which draws on multiple knowledge systems and gains value when shared widely. For scientific work, it ensures that knowledge reaches a wider audience, including practitioners and policymakers. This openness fosters dialogue across different sectors, making research more inclusive and encouraging greater collaboration among diverse groups. The Q&A can also be read here.
    Like
    Love
    Wow
    Sad
    Angry
    443
    2 Комментарии 0 Поделились
  • Four science-based rules that will make your conversations flow

    One of the four pillars of good conversation is levity. You needn’t be a comedian, you can but have some funTetra Images, LLC/Alamy
    Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats.
    David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance?
    Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, andall get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail.

    Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from?
    I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently.

    Receive a weekly dose of discovery in your inbox.

    Sign up to newsletter

    The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too.
    “TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. How should we decide what to talk about?
    My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different.
    Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be trickyMartin Parr/Magnum Photos
    What’s your advice when making these decisions?
    There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else.
    After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things. This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it.
    The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is?
    Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already.

    What kinds of questions should we be asking – and avoiding?
    In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful.
    There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomansand I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere. Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it.
    Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flowJanelle Bruno
    What are the benefits of levity?
    When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again.
    Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room.
    Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian?
    Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship. These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud.

    This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like?
    Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective. Being a kinder conversationalist is about trying to focus on your partner’s perspective and then figuring what they need and helping them to get it.
    Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone?
    Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles and then a little grace can go very, very far.
    Topics:
    #four #sciencebased #rules #that #will
    Four science-based rules that will make your conversations flow
    One of the four pillars of good conversation is levity. You needn’t be a comedian, you can but have some funTetra Images, LLC/Alamy Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats. David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance? Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, andall get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail. Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from? I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently. Receive a weekly dose of discovery in your inbox. Sign up to newsletter The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too. “TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. How should we decide what to talk about? My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different. Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be trickyMartin Parr/Magnum Photos What’s your advice when making these decisions? There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else. After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things. This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it. The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is? Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already. What kinds of questions should we be asking – and avoiding? In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful. There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomansand I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere. Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it. Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flowJanelle Bruno What are the benefits of levity? When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again. Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room. Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian? Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship. These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud. This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like? Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective. Being a kinder conversationalist is about trying to focus on your partner’s perspective and then figuring what they need and helping them to get it. Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone? Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles and then a little grace can go very, very far. Topics: #four #sciencebased #rules #that #will
    WWW.NEWSCIENTIST.COM
    Four science-based rules that will make your conversations flow
    One of the four pillars of good conversation is levity. You needn’t be a comedian, you can but have some funTetra Images, LLC/Alamy Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats. David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance? Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, and [my students] all get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail. Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from? I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently. Receive a weekly dose of discovery in your inbox. Sign up to newsletter The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too. “TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. How should we decide what to talk about? My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different. Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be trickyMartin Parr/Magnum Photos What’s your advice when making these decisions? There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else. After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things. This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it. The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is? Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already. What kinds of questions should we be asking – and avoiding? In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful. There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomans [at Imperial College London] and I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere. Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it. Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flowJanelle Bruno What are the benefits of levity? When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again. Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room. Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian? Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship. These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud. This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like? Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective. Being a kinder conversationalist is about trying to focus on your partner’s perspective and then figuring what they need and helping them to get it. Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone? Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles and then a little grace can go very, very far. Topics:
    Like
    Love
    Wow
    Sad
    Angry
    522
    2 Комментарии 0 Поделились
  • How AI is reshaping the future of healthcare and medical research

    Transcript       
    PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”          
    This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.   
    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?    
    In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.” 
    In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.   
    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open. 
    As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.  
    Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home. 
    Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.     
    Here’s my conversation with Bill Gates and Sébastien Bubeck. 
    LEE: Bill, welcome. 
    BILL GATES: Thank you. 
    LEE: Seb … 
    SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here. 
    LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening? 
    And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?  
    GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines. 
    And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.  
    And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning. 
    LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that? 
    GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, … 
    LEE: Right.  
    GATES: … that is a bit weird.  
    LEE: Yeah. 
    GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training. 
    LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. 
    BUBECK: Yes.  
    LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you. 
    BUBECK: Yeah. 
    LEE: And so what were your first encounters? Because I actually don’t remember what happened then. 
    BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3. 
    I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1. 
    So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts. 
    So this was really, to me, the first moment where I saw some understanding in those models.  
    LEE: So this was, just to get the timing right, that was before I pulled you into the tent. 
    BUBECK: That was before. That was like a year before. 
    LEE: Right.  
    BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4. 
    So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.  
    So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x. 
    And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?  
    LEE: Yeah.
    BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.  
    LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine. 
    And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.  
    And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.  
    I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book. 
    But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements. 
    But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today? 
    You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.  
    Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork? 
    GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.  
    It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision. 
    But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view. 
    LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you? 
    BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong? 
    Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.  
    Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them. 
    And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.  
    Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way. 
    It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine. 
    LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all? 
    GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that. 
    The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa,
    So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.  
    LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking? 
    GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.  
    The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.  
    LEE: Right.  
    GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.  
    LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication. 
    BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI. 
    It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for. 
    LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes. 
    I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?  
    That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that? 
    BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there. 
    Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad. 
    But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model. 
    So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model. 
    LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and … 
    BUBECK: It’s a very difficult, very difficult balance. 
    LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models? 
    GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there. 
    Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?  
    Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there.
    LEE: Yeah.
    GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake. 
    LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on. 
    BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything. 
    That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind. 
    LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two? 
    BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it. 
    LEE: So we have about three hours of stuff to talk about, but our time is actually running low.
    BUBECK: Yes, yes, yes.  
    LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now? 
    GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.  
    The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities. 
    And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period. 
    LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers? 
    GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them. 
    LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.  
    I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why. 
    BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.  
    And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.  
    LEE: Yeah. 
    BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.  
    Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not. 
    Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision. 
    LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist … 
    BUBECK: Yeah.
    LEE: … or an endocrinologist might not.
    BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know.
    LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today? 
    BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later. 
    And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …  
    LEE: Will AI prescribe your medicines? Write your prescriptions? 
    BUBECK: I think yes. I think yes. 
    LEE: OK. Bill? 
    GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate?
    And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries. 
    You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that. 
    LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.  
    I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  
    GATES: Yeah. Thanks, you guys. 
    BUBECK: Thank you, Peter. Thanks, Bill. 
    LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.   
    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.  
    And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.  
    One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.  
    HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings. 
    You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.  
    If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  
    I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.  
    Until next time.  
    #how #reshaping #future #healthcare #medical
    How AI is reshaping the future of healthcare and medical research
    Transcript        PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”           This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.  The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.      Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weaknessthat, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent.  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSRto join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well.My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair.And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE:One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce aboutor indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients.Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT. And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE, for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential.What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back thatversion of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF, where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGIthat kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects.So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and seeproduced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini. So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelectedjust on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.   GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.   I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   #how #reshaping #future #healthcare #medical
    WWW.MICROSOFT.COM
    How AI is reshaping the future of healthcare and medical research
    Transcript [MUSIC]      [BOOK PASSAGE]   PETER LEE: “In ‘The Little Black Bag,’ a classic science fiction story, a high-tech doctor’s kit of the future is accidentally transported back to the 1950s, into the shaky hands of a washed-up, alcoholic doctor. The ultimate medical tool, it redeems the doctor wielding it, allowing him to practice gratifyingly heroic medicine. … The tale ends badly for the doctor and his treacherous assistant, but it offered a picture of how advanced technology could transform medicine—powerful when it was written nearly 75 years ago and still so today. What would be the Al equivalent of that little black bag? At this moment when new capabilities are emerging, how do we imagine them into medicine?”   [END OF BOOK PASSAGE]     [THEME MUSIC]     This is The AI Revolution in Medicine, Revisited. I’m your host, Peter Lee.    Shortly after OpenAI’s GPT-4 was publicly released, Carey Goldberg, Dr. Zak Kohane, and I published The AI Revolution in Medicine to help educate the world of healthcare and medical research about the transformative impact this new generative AI technology could have. But because we wrote the book when GPT-4 was still a secret, we had to speculate. Now, two years later, what did we get right, and what did we get wrong?     In this series, we’ll talk to clinicians, patients, hospital administrators, and others to understand the reality of AI in the field and where we go from here.   [THEME MUSIC FADES] The book passage I read at the top is from “Chapter 10: The Big Black Bag.”  In imagining AI in medicine, Carey, Zak, and I included in our book two fictional accounts. In the first, a medical resident consults GPT-4 on her personal phone as the patient in front of her crashes. Within seconds, it offers an alternate response based on recent literature. In the second account, a 90-year-old woman with several chronic conditions is living independently and receiving near-constant medical support from an AI aide.    In our conversations with the guests we’ve spoken to so far, we’ve caught a glimpse of these predicted futures, seeing how clinicians and patients are actually using AI today and how developers are leveraging the technology in the healthcare products and services they’re creating. In fact, that first fictional account isn’t so fictional after all, as most of the doctors in the real world actually appear to be using AI at least occasionally—and sometimes much more than occasionally—to help in their daily clinical work. And as for the second fictional account, which is more of a science fiction account, it seems we are indeed on the verge of a new way of delivering and receiving healthcare, though the future is still very much open.  As we continue to examine the current state of AI in healthcare and its potential to transform the field, I’m pleased to welcome Bill Gates and Sébastien Bubeck.   Bill may be best known as the co-founder of Microsoft, having created the company with his childhood friend Paul Allen in 1975. He’s now the founder of Breakthrough Energy, which aims to advance clean energy innovation, and TerraPower, a company developing groundbreaking nuclear energy and science technologies. He also chairs the world’s largest philanthropic organization, the Gates Foundation, and focuses on solving a variety of health challenges around the globe and here at home.  Sébastien is a research lead at OpenAI. He was previously a distinguished scientist, vice president of AI, and a colleague of mine here at Microsoft, where his work included spearheading the development of the family of small language models known as Phi. While at Microsoft, he also coauthored the discussion-provoking 2023 paper “Sparks of Artificial General Intelligence,” which presented the results of early experiments with GPT-4 conducted by a small team from Microsoft Research.    [TRANSITION MUSIC]   Here’s my conversation with Bill Gates and Sébastien Bubeck.  LEE: Bill, welcome.  BILL GATES: Thank you.  LEE: Seb …  SÉBASTIEN BUBECK: Yeah. Hi, hi, Peter. Nice to be here.  LEE: You know, one of the things that I’ve been doing just to get the conversation warmed up is to talk about origin stories, and what I mean about origin stories is, you know, what was the first contact that you had with large language models or the concept of generative AI that convinced you or made you think that something really important was happening?  And so, Bill, I think I’ve heard the story about, you know, the time when the OpenAI folks—Sam Altman, Greg Brockman, and others—showed you something, but could we hear from you what those early encounters were like and what was going through your mind?   GATES: Well, I’d been visiting OpenAI soon after it was created to see things like GPT-2 and to see the little arm they had that was trying to match human manipulation and, you know, looking at their games like Dota that they were trying to get as good as human play. And honestly, I didn’t think the language model stuff they were doing, even when they got to GPT-3, would show the ability to learn, you know, in the same sense that a human reads a biology book and is able to take that knowledge and access it not only to pass a test but also to create new medicines.  And so my challenge to them was that if their LLM could get a five on the advanced placement biology test, then I would say, OK, it took biologic knowledge and encoded it in an accessible way and that I didn’t expect them to do that very quickly but it would be profound.   And it was only about six months after I challenged them to do that, that an early version of GPT-4 they brought up to a dinner at my house, and in fact, it answered most of the questions that night very well. The one it got totally wrong, we were … because it was so good, we kept thinking, Oh, we must be wrong. It turned out it was a math weakness [LAUGHTER] that, you know, we later understood that that was an area of, weirdly, of incredible weakness of those early models. But, you know, that was when I realized, OK, the age of cheap intelligence was at its beginning.  LEE: Yeah. So I guess it seems like you had something similar to me in that my first encounters, I actually harbored some skepticism. Is it fair to say you were skeptical before that?  GATES: Well, the idea that we’ve figured out how to encode and access knowledge in this very deep sense without even understanding the nature of the encoding, …  LEE: Right.   GATES: … that is a bit weird.   LEE: Yeah.  GATES: We have an algorithm that creates the computation, but even say, OK, where is the president’s birthday stored in there? Where is this fact stored in there? The fact that even now when we’re playing around, getting a little bit more sense of it, it’s opaque to us what the semantic encoding is, it’s, kind of, amazing to me. I thought the invention of knowledge storage would be an explicit way of encoding knowledge, not an implicit statistical training.  LEE: Yeah, yeah. All right. So, Seb, you know, on this same topic, you know, I got—as we say at Microsoft—I got pulled into the tent. [LAUGHS]  BUBECK: Yes.   LEE: Because this was a very secret project. And then, um, I had the opportunity to select a small number of researchers in MSR [Microsoft Research] to join and start investigating this thing seriously. And the first person I pulled in was you.  BUBECK: Yeah.  LEE: And so what were your first encounters? Because I actually don’t remember what happened then.  BUBECK: Oh, I remember it very well. [LAUGHS] My first encounter with GPT-4 was in a meeting with the two of you, actually. But my kind of first contact, the first moment where I realized that something was happening with generative AI, was before that. And I agree with Bill that I also wasn’t too impressed by GPT-3.  I though that it was kind of, you know, very naturally mimicking the web, sort of parroting what was written there in a nice way. Still in a way which seemed very impressive. But it wasn’t really intelligent in any way. But shortly after GPT-3, there was a model before GPT-4 that really shocked me, and this was the first image generation model, DALL-E 1.  So that was in 2021. And I will forever remember the press release of OpenAI where they had this prompt of an avocado chair and then you had this image of the avocado chair. [LAUGHTER] And what really shocked me is that clearly the model kind of “understood” what is a chair, what is an avocado, and was able to merge those concepts.  So this was really, to me, the first moment where I saw some understanding in those models.   LEE: So this was, just to get the timing right, that was before I pulled you into the tent.  BUBECK: That was before. That was like a year before.  LEE: Right.   BUBECK: And now I will tell you how, you know, we went from that moment to the meeting with the two of you and GPT-4.  So once I saw this kind of understanding, I thought, OK, fine. It understands concept, but it’s still not able to reason. It cannot—as, you know, Bill was saying—it cannot learn from your document. It cannot reason.   So I set out to try to prove that. You know, this is what I was in the business of at the time, trying to prove things in mathematics. So I was trying to prove that basically autoregressive transformers could never reason. So I was trying to prove this. And after a year of work, I had something reasonable to show. And so I had the meeting with the two of you, and I had this example where I wanted to say, there is no way that an LLM is going to be able to do x.  And then as soon as I … I don’t know if you remember, Bill. But as soon as I said that, you said, oh, but wait a second. I had, you know, the OpenAI crew at my house recently, and they showed me a new model. Why don’t we ask this new model this question?   LEE: Yeah. BUBECK: And we did, and it solved it on the spot. And that really, honestly, just changed my life. Like, you know, I had been working for a year trying to say that this was impossible. And just right there, it was shown to be possible.   LEE: [LAUGHS] One of the very first things I got interested in—because I was really thinking a lot about healthcare—was healthcare and medicine.  And I don’t know if the two of you remember, but I ended up doing a lot of tests. I ran through, you know, step one and step two of the US Medical Licensing Exam. Did a whole bunch of other things. I wrote this big report. It was, you know, I can’t remember … a couple hundred pages.   And I needed to share this with someone. I didn’t … there weren’t too many people I could share it with. So I sent, I think, a copy to you, Bill. Sent a copy to you, Seb.   I hardly slept for about a week putting that report together. And, yeah, and I kept working on it. But I was far from alone. I think everyone who was in the tent, so to speak, in those early days was going through something pretty similar. All right. So I think … of course, a lot of what I put in the report also ended up being examples that made it into the book.  But the main purpose of this conversation isn’t to reminisce about [LAUGHS] or indulge in those reminiscences but to talk about what’s happening in healthcare and medicine. And, you know, as I said, we wrote this book. We did it very, very quickly. Seb, you helped. Bill, you know, you provided a review and some endorsements.  But, you know, honestly, we didn’t know what we were talking about because no one had access to this thing. And so we just made a bunch of guesses. So really, the whole thing I wanted to probe with the two of you is, now with two years of experience out in the world, what, you know, what do we think is happening today?  You know, is AI actually having an impact, positive or negative, on healthcare and medicine? And what do we now think is going to happen in the next two years, five years, or 10 years? And so I realize it’s a little bit too abstract to just ask it that way. So let me just try to narrow the discussion and guide us a little bit.   Um, the kind of administrative and clerical work, paperwork, around healthcare—and we made a lot of guesses about that—that appears to be going well, but, you know, Bill, I know we’ve discussed that sometimes that you think there ought to be a lot more going on. Do you have a viewpoint on how AI is actually finding its way into reducing paperwork?  GATES: Well, I’m stunned … I don’t think there should be a patient-doctor meeting where the AI is not sitting in and both transcribing, offering to help with the paperwork, and even making suggestions, although the doctor will be the one, you know, who makes the final decision about the diagnosis and whatever prescription gets done.   It’s so helpful. You know, when that patient goes home and their, you know, son who wants to understand what happened has some questions, that AI should be available to continue that conversation. And the way you can improve that experience and streamline things and, you know, involve the people who advise you. I don’t understand why that’s not more adopted, because there you still have the human in the loop making that final decision.  But even for, like, follow-up calls to make sure the patient did things, to understand if they have concerns and knowing when to escalate back to the doctor, the benefit is incredible. And, you know, that thing is ready for prime time. That paradigm is ready for prime time, in my view.  LEE: Yeah, there are some good products, but it seems like the number one use right now—and we kind of got this from some of the previous guests in previous episodes—is the use of AI just to respond to emails from patients. [LAUGHTER] Does that make sense to you?  BUBECK: Yeah. So maybe I want to second what Bill was saying but maybe take a step back first. You know, two years ago, like, the concept of clinical scribes, which is one of the things that we’re talking about right now, it would have sounded, in fact, it sounded two years ago, borderline dangerous. Because everybody was worried about hallucinations. What happened if you have this AI listening in and then it transcribes, you know, something wrong?  Now, two years later, I think it’s mostly working. And in fact, it is not yet, you know, fully adopted. You’re right. But it is in production. It is used, you know, in many, many places. So this rate of progress is astounding because it wasn’t obvious that we would be able to overcome those obstacles of hallucination. It’s not to say that hallucinations are fully solved. In the case of the closed system, they are.   Now, I think more generally what’s going on in the background is that there is something that we, that certainly I, underestimated, which is this management overhead. So I think the reason why this is not adopted everywhere is really a training and teaching aspect. People need to be taught, like, those systems, how to interact with them.  And one example that I really like, a study that recently appeared where they tried to use ChatGPT for diagnosis and they were comparing doctors without and with ChatGPT (opens in new tab). And the amazing thing … so this was a set of cases where the accuracy of the doctors alone was around 75%. ChatGPT alone was 90%. So that’s already kind of mind blowing. But then the kicker is that doctors with ChatGPT was 80%.   Intelligence alone is not enough. It’s also how it’s presented, how you interact with it. And ChatGPT, it’s an amazing tool. Obviously, I absolutely love it. But it’s not … you don’t want a doctor to have to type in, you know, prompts and use it that way.  It should be, as Bill was saying, kind of running continuously in the background, sending you notifications. And you have to be really careful of the rate at which those notifications are being sent. Because if they are too frequent, then the doctor will learn to ignore them. So you have to … all of those things matter, in fact, at least as much as the level of intelligence of the machine.  LEE: One of the things I think about, Bill, in that scenario that you described, doctors do some thinking about the patient when they write the note. So, you know, I’m always a little uncertain whether it’s actually … you know, you wouldn’t necessarily want to fully automate this, I don’t think. Or at least there needs to be some prompt to the doctor to make sure that the doctor puts some thought into what happened in the encounter with the patient. Does that make sense to you at all?  GATES: At this stage, you know, I’d still put the onus on the doctor to write the conclusions and the summary and not delegate that.  The tradeoffs you make a little bit are somewhat dependent on the situation you’re in. If you’re in Africa, So, yes, the doctor’s still going to have to do a lot of work, but just the quality of letting the patient and the people around them interact and ask questions and have things explained, that alone is such a quality improvement. It’s mind blowing.   LEE: So since you mentioned, you know, Africa—and, of course, this touches on the mission and some of the priorities of the Gates Foundation and this idea of democratization of access to expert medical care—what’s the most interesting stuff going on right now? Are there people and organizations or technologies that are impressing you or that you’re tracking?  GATES: Yeah. So the Gates Foundation has given out a lot of grants to people in Africa doing education, agriculture but more healthcare examples than anything. And the way these things start off, they often start out either being patient-centric in a narrow situation, like, OK, I’m a pregnant woman; talk to me. Or, I have infectious disease symptoms; talk to me. Or they’re connected to a health worker where they’re helping that worker get their job done. And we have lots of pilots out, you know, in both of those cases.   The dream would be eventually to have the thing the patient consults be so broad that it’s like having a doctor available who understands the local things.   LEE: Right.   GATES: We’re not there yet. But over the next two or three years, you know, particularly given the worsening financial constraints against African health systems, where the withdrawal of money has been dramatic, you know, figuring out how to take this—what I sometimes call “free intelligence”—and build a quality health system around that, we will have to be more radical in low-income countries than any rich country is ever going to be.   LEE: Also, there’s maybe a different regulatory environment, so some of those things maybe are easier? Because right now, I think the world hasn’t figured out how to and whether to regulate, let’s say, an AI that might give a medical diagnosis or write a prescription for a medication.  BUBECK: Yeah. I think one issue with this, and it’s also slowing down the deployment of AI in healthcare more generally, is a lack of proper benchmark. Because, you know, you were mentioning the USMLE [United States Medical Licensing Examination], for example. That’s a great test to test human beings and their knowledge of healthcare and medicine. But it’s not a great test to give to an AI.  It’s not asking the right questions. So finding what are the right questions to test whether an AI system is ready to give diagnosis in a constrained setting, that’s a very, very important direction, which to my surprise, is not yet accelerating at the rate that I was hoping for.  LEE: OK, so that gives me an excuse to get more now into the core AI tech because something I’ve discussed with both of you is this issue of what are the right tests. And you both know the very first test I give to any new spin of an LLM is I present a patient, the results—a mythical patient—the results of my physical exam, my mythical physical exam. Maybe some results of some initial labs. And then I present or propose a differential diagnosis. And if you’re not in medicine, a differential diagnosis you can just think of as a prioritized list of the possible diagnoses that fit with all that data. And in that proposed differential, I always intentionally make two mistakes.  I make a textbook technical error in one of the possible elements of the differential diagnosis, and I have an error of omission. And, you know, I just want to know, does the LLM understand what I’m talking about? And all the good ones out there do now. But then I want to know, can it spot the errors? And then most importantly, is it willing to tell me I’m wrong, that I’ve made a mistake?   That last piece seems really hard for AI today. And so let me ask you first, Seb, because at the time of this taping, of course, there was a new spin of GPT-4o last week that became overly sycophantic. In other words, it was actually prone in that test of mine not only to not tell me I’m wrong, but it actually praised me for the creativity of my differential. [LAUGHTER] What’s up with that?  BUBECK: Yeah, I guess it’s a testament to the fact that training those models is still more of an art than a science. So it’s a difficult job. Just to be clear with the audience, we have rolled back that [LAUGHS] version of GPT-4o, so now we don’t have the sycophant version out there.  Yeah, no, it’s a really difficult question. It has to do … as you said, it’s very technical. It has to do with the post-training and how, like, where do you nudge the model? So, you know, there is this very classical by now technique called RLHF [reinforcement learning from human feedback], where you push the model in the direction of a certain reward model. So the reward model is just telling the model, you know, what behavior is good, what behavior is bad.  But this reward model is itself an LLM, and, you know, Bill was saying at the very beginning of the conversation that we don’t really understand how those LLMs deal with concepts like, you know, where is the capital of France located? Things like that. It is the same thing for this reward model. We don’t know why it says that it prefers one output to another, and whether this is correlated with some sycophancy is, you know, something that we discovered basically just now. That if you push too hard in optimization on this reward model, you will get a sycophant model.  So it’s kind of … what I’m trying to say is we became too good at what we were doing, and we ended up, in fact, in a trap of the reward model.  LEE: I mean, you do want … it’s a difficult balance because you do want models to follow your desires and …  BUBECK: It’s a very difficult, very difficult balance.  LEE: So this brings up then the following question for me, which is the extent to which we think we’ll need to have specially trained models for things. So let me start with you, Bill. Do you have a point of view on whether we will need to, you know, quote-unquote take AI models to med school? Have them specially trained? Like, if you were going to deploy something to give medical care in underserved parts of the world, do we need to do something special to create those models?  GATES: We certainly need to teach them the African languages and the unique dialects so that the multimedia interactions are very high quality. We certainly need to teach them the disease prevalence and unique disease patterns like, you know, neglected tropical diseases and malaria. So we need to gather a set of facts that somebody trying to go for a US customer base, you know, wouldn’t necessarily have that in there.  Those two things are actually very straightforward because the additional training time is small. I’d say for the next few years, we’ll also need to do reinforcement learning about the context of being a doctor and how important certain behaviors are. Humans learn over the course of their life to some degree that, I’m in a different context and the way I behave in terms of being willing to criticize or be nice, you know, how important is it? Who’s here? What’s my relationship to them?   Right now, these machines don’t have that broad social experience. And so if you know it’s going to be used for health things, a lot of reinforcement learning of the very best humans in that context would still be valuable. Eventually, the models will, having read all the literature of the world about good doctors, bad doctors, it’ll understand as soon as you say, “I want you to be a doctor diagnosing somebody.” All of the implicit reinforcement that fits that situation, you know, will be there. LEE: Yeah. GATES: And so I hope three years from now, we don’t have to do that reinforcement learning. But today, for any medical context, you would want a lot of data to reinforce tone, willingness to say things when, you know, there might be something significant at stake.  LEE: Yeah. So, you know, something Bill said, kind of, reminds me of another thing that I think we missed, which is, the context also … and the specialization also pertains to different, I guess, what we still call “modes,” although I don’t know if the idea of multimodal is the same as it was two years ago. But, you know, what do you make of all of the hubbub around—in fact, within Microsoft Research, this is a big deal, but I think we’re far from alone—you know, medical images and vision, video, proteins and molecules, cell, you know, cellular data and so on.  BUBECK: Yeah. OK. So there is a lot to say to everything … to the last, you know, couple of minutes. Maybe on the specialization aspect, you know, I think there is, hiding behind this, a really fundamental scientific question of whether eventually we have a singular AGI [artificial general intelligence] that kind of knows everything and you can just put, you know, explain your own context and it will just get it and understand everything.  That’s one vision. I have to say, I don’t particularly believe in this vision. In fact, we humans are not like that at all. I think, hopefully, we are general intelligences, yet we have to specialize a lot. And, you know, I did myself a lot of RL, reinforcement learning, on mathematics. Like, that’s what I did, you know, spent a lot of time doing that. And I didn’t improve on other aspects. You know, in fact, I probably degraded in other aspects. [LAUGHTER] So it’s … I think it’s an important example to have in mind.  LEE: I think I might disagree with you on that, though, because, like, doesn’t a model have to see both good science and bad science in order to be able to gain the ability to discern between the two?  BUBECK: Yeah, no, that absolutely. I think there is value in seeing the generality, in having a very broad base. But then you, kind of, specialize on verticals. And this is where also, you know, open-weights model, which we haven’t talked about yet, are really important because they allow you to provide this broad base to everyone. And then you can specialize on top of it.  LEE: So we have about three hours of stuff to talk about, but our time is actually running low. BUBECK: Yes, yes, yes.   LEE: So I think I want … there’s a more provocative question. It’s almost a silly question, but I need to ask it of the two of you, which is, is there a future, you know, where AI replaces doctors or replaces, you know, medical specialties that we have today? So what does the world look like, say, five years from now?  GATES: Well, it’s important to distinguish healthcare discovery activity from healthcare delivery activity. We focused mostly on delivery. I think it’s very much within the realm of possibility that the AI is not only accelerating healthcare discovery but substituting for a lot of the roles of, you know, I’m an organic chemist, or I run various types of assays. I can see those, which are, you know, testable-output-type jobs but with still very high value, I can see, you know, some replacement in those areas before the doctor.   The doctor, still understanding the human condition and long-term dialogues, you know, they’ve had a lifetime of reinforcement of that, particularly when you get into areas like mental health. So I wouldn’t say in five years, either people will choose to adopt it, but it will be profound that there’ll be this nearly free intelligence that can do follow-up, that can help you, you know, make sure you went through different possibilities.  And so I’d say, yes, we’ll have doctors, but I’d say healthcare will be massively transformed in its quality and in efficiency by AI in that time period.  LEE: Is there a comparison, useful comparison, say, between doctors and, say, programmers, computer programmers, or doctors and, I don’t know, lawyers?  GATES: Programming is another one that has, kind of, a mathematical correctness to it, you know, and so the objective function that you’re trying to reinforce to, as soon as you can understand the state machines, you can have something that’s “checkable”; that’s correct. So I think programming, you know, which is weird to say, that the machine will beat us at most programming tasks before we let it take over roles that have deep empathy, you know, physical presence and social understanding in them.  LEE: Yeah. By the way, you know, I fully expect in five years that AI will produce mathematical proofs that are checkable for validity, easily checkable, because they’ll be written in a proof-checking language like Lean or something but will be so complex that no human mathematician can understand them. I expect that to happen.   I can imagine in some fields, like cellular biology, we could have the same situation in the future because the molecular pathways, the chemistry, biochemistry of human cells or living cells is as complex as any mathematics, and so it seems possible that we may be in a state where in wet lab, we see, Oh yeah, this actually works, but no one can understand why.  BUBECK: Yeah, absolutely. I mean, I think I really agree with Bill’s distinction of the discovery and the delivery, and indeed, the discovery’s when you can check things, and at the end, there is an artifact that you can verify. You know, you can run the protocol in the wet lab and see [if you have] produced what you wanted. So I absolutely agree with that.   And in fact, you know, we don’t have to talk five years from now. I don’t know if you know, but just recently, there was a paper that was published on a scientific discovery using o3- mini (opens in new tab). So this is really amazing. And, you know, just very quickly, just so people know, it was about this statistical physics model, the frustrated Potts model, which has to do with coloring, and basically, the case of three colors, like, more than two colors was open for a long time, and o3 was able to reduce the case of three colors to two colors.   LEE: Yeah.  BUBECK: Which is just, like, astounding. And this is not … this is now. This is happening right now. So this is something that I personally didn’t expect it would happen so quickly, and it’s due to those reasoning models.   Now, on the delivery side, I would add something more to it for the reason why doctors and, in fact, lawyers and coders will remain for a long time, and it’s because we still don’t understand how those models generalize. Like, at the end of the day, we are not able to tell you when they are confronted with a really new, novel situation, whether they will work or not.  Nobody is able to give you that guarantee. And I think until we understand this generalization better, we’re not going to be willing to just let the system in the wild without human supervision.  LEE: But don’t human doctors, human specialists … so, for example, a cardiologist sees a patient in a certain way that a nephrologist …  BUBECK: Yeah. LEE: … or an endocrinologist might not. BUBECK: That’s right. But another cardiologist will understand and, kind of, expect a certain level of generalization from their peer. And this, we just don’t have it with AI models. Now, of course, you’re exactly right. That generalization is also hard for humans. Like, if you have a human trained for one task and you put them into another task, then you don’t … you often don’t know. LEE: OK. You know, the podcast is focused on what’s happened over the last two years. But now, I’d like one provocative prediction about what you think the world of AI and medicine is going to be at some point in the future. You pick your timeframe. I don’t care if it’s two years or 20 years from now, but, you know, what do you think will be different about AI in medicine in that future than today?  BUBECK: Yeah, I think the deployment is going to accelerate soon. Like, we’re really not missing very much. There is this enormous capability overhang. Like, even if progress completely stopped, with current systems, we can do a lot more than what we’re doing right now. So I think this will … this has to be realized, you know, sooner rather than later.  And I think it’s probably dependent on these benchmarks and proper evaluation and tying this with regulation. So these are things that take time in human society and for good reason. But now we already are at two years; you know, give it another two years and it should be really …   LEE: Will AI prescribe your medicines? Write your prescriptions?  BUBECK: I think yes. I think yes.  LEE: OK. Bill?  GATES: Well, I think the next two years, we’ll have massive pilots, and so the amount of use of the AI, still in a copilot-type mode, you know, we should get millions of patient visits, you know, both in general medicine and in the mental health side, as well. And I think that’s going to build up both the data and the confidence to give the AI some additional autonomy. You know, are you going to let it talk to you at night when you’re panicked about your mental health with some ability to escalate? And, you know, I’ve gone so far as to tell politicians with national health systems that if they deploy AI appropriately, that the quality of care, the overload of the doctors, the improvement in the economics will be enough that their voters will be stunned because they just don’t expect this, and, you know, they could be reelected [LAUGHTER] just on this one thing of fixing what is a very overloaded and economically challenged health system in these rich countries.  You know, my personal role is going to be to make sure that in the poorer countries, there isn’t some lag; in fact, in many cases, that we’ll be more aggressive because, you know, we’re comparing to having no access to doctors at all. And, you know, so I think whether it’s India or Africa, there’ll be lessons that are globally valuable because we need medical intelligence. And, you know, thank god AI is going to provide a lot of that.  LEE: Well, on that optimistic note, I think that’s a good way to end. Bill, Seb, really appreciate all of this.   I think the most fundamental prediction we made in the book is that AI would actually find its way into the practice of medicine, and I think that that at least has come true, maybe in different ways than we expected, but it’s come true, and I think it’ll only accelerate from here. So thanks again, both of you.  [TRANSITION MUSIC]  GATES: Yeah. Thanks, you guys.  BUBECK: Thank you, Peter. Thanks, Bill.  LEE: I just always feel such a sense of privilege to have a chance to interact and actually work with people like Bill and Sébastien.    With Bill, I’m always amazed at how practically minded he is. He’s really thinking about the nuts and bolts of what AI might be able to do for people, and his thoughts about underserved parts of the world, the idea that we might actually be able to empower people with access to expert medical knowledge, I think is both inspiring and amazing.   And then, Seb, Sébastien Bubeck, he’s just absolutely a brilliant mind. He has a really firm grip on the deep mathematics of artificial intelligence and brings that to bear in his research and development work. And where that mathematics takes him isn’t just into the nuts and bolts of algorithms but into philosophical questions about the nature of intelligence.   One of the things that Sébastien brought up was the state of evaluation of AI systems. And indeed, he was fairly critical in our conversation. But of course, the world of AI research and development is just moving so fast, and indeed, since we recorded our conversation, OpenAI, in fact, released a new evaluation metric that is directly relevant to medical applications, and that is something called HealthBench. And Microsoft Research also released a new evaluation approach or process called ADeLe.   HealthBench and ADeLe are examples of new approaches to evaluating AI models that are less about testing their knowledge and ability to pass multiple-choice exams and instead are evaluation approaches designed to assess how well AI models are able to complete tasks that actually arise every day in typical healthcare or biomedical research settings. These are examples of really important good work that speak to how well AI models work in the real world of healthcare and biomedical research and how well they can collaborate with human beings in those settings.  You know, I asked Bill and Seb to make some predictions about the future. You know, my own answer, I expect that we’re going to be able to use AI to change how we diagnose patients, change how we decide treatment options.   If you’re a doctor or a nurse and you encounter a patient, you’ll ask questions, do a physical exam, you know, call out for labs just like you do today, but then you’ll be able to engage with AI based on all of that data and just ask, you know, based on all the other people who have gone through the same experience, who have similar data, how were they diagnosed? How were they treated? What were their outcomes? And what does that mean for the patient I have right now? Some people call it the “patients like me” paradigm. And I think that’s going to become real because of AI within our lifetimes. That idea of really grounding the delivery in healthcare and medical practice through data and intelligence, I actually now don’t see any barriers to that future becoming real.  [THEME MUSIC]  I’d like to extend another big thank you to Bill and Sébastien for their time. And to our listeners, as always, it’s a pleasure to have you along for the ride. I hope you’ll join us for our remaining conversations, as well as a second coauthor roundtable with Carey and Zak.   Until next time.   [MUSIC FADES]
    0 Комментарии 0 Поделились
  • Premier Truck Rental: Inside Sales Representative - Remote Salt Lake Area

    Are you in search of a company that resonates with your proactive spirit and entrepreneurial mindset? Your search ends here with Premier Truck Rental! Company Overview At Premier Truck Rental, we provide customized commercial fleet rentals nationwide, helping businesses get the right trucks and equipment to get the job done. Headquartered in Fort Wayne, Indiana, PTR is a family-owned company built on a foundation of integrity, innovation, and exceptional service. We serve a wide range of industriesincluding construction, utilities, and infrastructureby delivering high-quality, ready-to-work trucks and trailers tailored to each customers needs. At PTR, we dont just rent truckswe partner with our customers to drive efficiency and success on every job site. Please keep reading Not sure if you meet every requirement? Thats okay! We encourage you to apply if youre passionate, hardworking, and eager to contribute. We know that diverse perspectives and experiences make us stronger, and we want you to be part of our journey. Inside Sales Representativeat PTR is a friendly, people-oriented, and persuasive steward of the sales process. This role will support our Territory Managers with their sales pipeline while also prospecting and cross-selling PTR products themselves. This support includes driving results by enrolling the commitment and buy-in of other internal departments to achieve sales initiatives. The Inside Sales Representative will also represent PTRs commitment to being our customers easy button by serving as the main point of contact. They will be the front-line hero by assisting them in making informed decisions, providing guidance on our rentals, and resolving any issues they might face. We are seeking someone eager to develop their sales skills and grow within our organization. This role is designed as a stepping stone to a Territory Sales Managerposition, providing hands-on experience with customer interactions, lead qualification, and sales process execution. Ideal candidates will demonstrate a strong drive for results, the ability to build relationships, and a proactive approach to learning and development. High-performing ISRs will have the opportunity to be mentored, trained, and considered for promotion into a TSM role as part of their career path at PTR. COMPENSATION This position offers a competitive compensation package of base salaryplus uncapped commissions =OTE annually. RESPONSIBILITIES Offer top-notch customer service and respond with a sense of urgency for goal achievement in a fast-paced sales environment. Build a strong pipeline of customers by qualifying potential leads in your territory. This includes strategic prospecting and sourcing. Develop creative ways to engage and build rapport with prospective customers by pitching the Premier Truck Rental value proposition. Partner with assigned Territory Managers by assisting with scheduling customer visits, trade shows, new customer hand-offs, and any other travel requested. Facilitate in-person meetings and set appointments with prospective customers. Qualify and quote inquiries for your prospective territories both online and from the Territory Manager. Input data into the system with accuracy and follow up in a timely fashion. Facilitate the onboarding of new customers through the credit process. Drive collaboration between customers, Territory Managers, Logistics, and internal teams to coordinate On-Rent and Off-Rent notices with excellent attention to detail. Identify and arrange the swap of equipment from customers meeting the PTR de-fleeting criteria. Manage the sales tools to organize, compile, and analyze data with accuracy for a variety of activities and multiple projects occurring simultaneously.Building and developing a new 3-4 state territory! REQUIREMENTS MUST HAVE2+ years of strategic prospecting or account manager/sales experience; or an advanced degree or equivalent experience converting prospects into closed sales. Tech-forward approach to sales strategy. Excellent prospecting, follow-up, and follow-through skills. Committed to seeing deals through completion. Accountability and ownership of the sales process and a strong commitment to results. Comfortable with a job that has a variety of tasks and is dynamic and changing. Proactive prospecting skills and can overcome objections; driven to establish relationships with new customers. Ability to communicate in a clear, logical manner in formal and informal situations. Proficiency in CRMs and sales tracking systems Hunters mindsetsomeone who thrives on pursuing new business, driving outbound sales, and generating qualified opportunities. Prospecting: Going on LinkedIn, Looking at Competitor data, grabbing contacts for the TM, may use technology like Apollo and LinkedIn Sales Navigator Partner closely with the Territory Manager to ensure a unified approach in managing customer relationships, pipeline development, and revenue growth. Maintain clear and consistent communication to align on sales strategies, customer needs, and market opportunities, fostering a seamless and collaborative partnership with the Territory Manager. Consistently meet and exceed key performance indicators, including rental revenue, upfit revenue, and conversion rates, by actively managing customer accounts and identifying growth opportunities. Support the saturation and maturation of the customer base through strategic outreach, relationship management, and alignment with the Territory Manager to drive long-term success. Remote in the United States with some travel to trade shows, quarterly travel up to a week at a time, and sales meetingsNICE TO HAVE Rental and/or sales experience in the industry. Proficiency in , Apollo.io , LinkedIn Sales Navigator, Power BI, MS Dynamics, Chat GPT. Established relationships within the marketplace or territory. Motivated to grow into outside territory management position with relocation On Target Earnings:EMPLOYEE BENEFITSWellness & Fitness: Take advantage of our on-site CrossFit-style gym, featuring a full-time personal trainer dedicated to helping you reach your fitness goals. Whether you're into group classes, virtual personal training, personalized workout plans, or nutrition coaching, weve got you covered!Exclusive Employee Perks: PTR Swag & a Uniform/Boot Allowance, On-site Micro-Markets stocked with snacks & essentials, discounts on phone plans, supplier vehicles, mobile detailing, tools, & equipmentand much more!Profit SharingYour Success, rewarded: At PTR, we believe in sharing success. Our Profit-SharingComprehensive BenefitsStarting Day One:Premium healthcare coverage401matching & long-term financial planning Paid time off that lets you recharge Life, accidental death, and disability coverage Ongoing learning & development opportunitiesTraining, Growth & RecognitionWe partner with Predictive Index to better understand your strengths, ensuring tailored coaching, structured training, and career development. Performance and attitude evaluations every 6 months keep you on track for growth.Culture & ConnectionMore Than Just a JobAt PTR, we dont just build relationships with our customerswe build them with each other. Our tech-forward, highly collaborative culture is rooted in our core values. Connect and engage through:PTR Field Days & Team EventsThe Extra Mile Recognition ProgramPTR Text Alerts & Open CommunicationPremier Truck Rental Is an Equal Opportunity Employer We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. If you need support or accommodation due to a disability, contact us at PI6e547fa1c5-
    #premier #truck #rental #inside #sales
    Premier Truck Rental: Inside Sales Representative - Remote Salt Lake Area
    Are you in search of a company that resonates with your proactive spirit and entrepreneurial mindset? Your search ends here with Premier Truck Rental! Company Overview At Premier Truck Rental, we provide customized commercial fleet rentals nationwide, helping businesses get the right trucks and equipment to get the job done. Headquartered in Fort Wayne, Indiana, PTR is a family-owned company built on a foundation of integrity, innovation, and exceptional service. We serve a wide range of industriesincluding construction, utilities, and infrastructureby delivering high-quality, ready-to-work trucks and trailers tailored to each customers needs. At PTR, we dont just rent truckswe partner with our customers to drive efficiency and success on every job site. Please keep reading Not sure if you meet every requirement? Thats okay! We encourage you to apply if youre passionate, hardworking, and eager to contribute. We know that diverse perspectives and experiences make us stronger, and we want you to be part of our journey. Inside Sales Representativeat PTR is a friendly, people-oriented, and persuasive steward of the sales process. This role will support our Territory Managers with their sales pipeline while also prospecting and cross-selling PTR products themselves. This support includes driving results by enrolling the commitment and buy-in of other internal departments to achieve sales initiatives. The Inside Sales Representative will also represent PTRs commitment to being our customers easy button by serving as the main point of contact. They will be the front-line hero by assisting them in making informed decisions, providing guidance on our rentals, and resolving any issues they might face. We are seeking someone eager to develop their sales skills and grow within our organization. This role is designed as a stepping stone to a Territory Sales Managerposition, providing hands-on experience with customer interactions, lead qualification, and sales process execution. Ideal candidates will demonstrate a strong drive for results, the ability to build relationships, and a proactive approach to learning and development. High-performing ISRs will have the opportunity to be mentored, trained, and considered for promotion into a TSM role as part of their career path at PTR. COMPENSATION This position offers a competitive compensation package of base salaryplus uncapped commissions =OTE annually. RESPONSIBILITIES Offer top-notch customer service and respond with a sense of urgency for goal achievement in a fast-paced sales environment. Build a strong pipeline of customers by qualifying potential leads in your territory. This includes strategic prospecting and sourcing. Develop creative ways to engage and build rapport with prospective customers by pitching the Premier Truck Rental value proposition. Partner with assigned Territory Managers by assisting with scheduling customer visits, trade shows, new customer hand-offs, and any other travel requested. Facilitate in-person meetings and set appointments with prospective customers. Qualify and quote inquiries for your prospective territories both online and from the Territory Manager. Input data into the system with accuracy and follow up in a timely fashion. Facilitate the onboarding of new customers through the credit process. Drive collaboration between customers, Territory Managers, Logistics, and internal teams to coordinate On-Rent and Off-Rent notices with excellent attention to detail. Identify and arrange the swap of equipment from customers meeting the PTR de-fleeting criteria. Manage the sales tools to organize, compile, and analyze data with accuracy for a variety of activities and multiple projects occurring simultaneously.Building and developing a new 3-4 state territory! REQUIREMENTS MUST HAVE2+ years of strategic prospecting or account manager/sales experience; or an advanced degree or equivalent experience converting prospects into closed sales. Tech-forward approach to sales strategy. Excellent prospecting, follow-up, and follow-through skills. Committed to seeing deals through completion. Accountability and ownership of the sales process and a strong commitment to results. Comfortable with a job that has a variety of tasks and is dynamic and changing. Proactive prospecting skills and can overcome objections; driven to establish relationships with new customers. Ability to communicate in a clear, logical manner in formal and informal situations. Proficiency in CRMs and sales tracking systems Hunters mindsetsomeone who thrives on pursuing new business, driving outbound sales, and generating qualified opportunities. Prospecting: Going on LinkedIn, Looking at Competitor data, grabbing contacts for the TM, may use technology like Apollo and LinkedIn Sales Navigator Partner closely with the Territory Manager to ensure a unified approach in managing customer relationships, pipeline development, and revenue growth. Maintain clear and consistent communication to align on sales strategies, customer needs, and market opportunities, fostering a seamless and collaborative partnership with the Territory Manager. Consistently meet and exceed key performance indicators, including rental revenue, upfit revenue, and conversion rates, by actively managing customer accounts and identifying growth opportunities. Support the saturation and maturation of the customer base through strategic outreach, relationship management, and alignment with the Territory Manager to drive long-term success. Remote in the United States with some travel to trade shows, quarterly travel up to a week at a time, and sales meetingsNICE TO HAVE Rental and/or sales experience in the industry. Proficiency in , Apollo.io , LinkedIn Sales Navigator, Power BI, MS Dynamics, Chat GPT. Established relationships within the marketplace or territory. Motivated to grow into outside territory management position with relocation On Target Earnings:EMPLOYEE BENEFITSWellness & Fitness: Take advantage of our on-site CrossFit-style gym, featuring a full-time personal trainer dedicated to helping you reach your fitness goals. Whether you're into group classes, virtual personal training, personalized workout plans, or nutrition coaching, weve got you covered!Exclusive Employee Perks: PTR Swag & a Uniform/Boot Allowance, On-site Micro-Markets stocked with snacks & essentials, discounts on phone plans, supplier vehicles, mobile detailing, tools, & equipmentand much more!Profit SharingYour Success, rewarded: At PTR, we believe in sharing success. Our Profit-SharingComprehensive BenefitsStarting Day One:Premium healthcare coverage401matching & long-term financial planning Paid time off that lets you recharge Life, accidental death, and disability coverage Ongoing learning & development opportunitiesTraining, Growth & RecognitionWe partner with Predictive Index to better understand your strengths, ensuring tailored coaching, structured training, and career development. Performance and attitude evaluations every 6 months keep you on track for growth.Culture & ConnectionMore Than Just a JobAt PTR, we dont just build relationships with our customerswe build them with each other. Our tech-forward, highly collaborative culture is rooted in our core values. Connect and engage through:PTR Field Days & Team EventsThe Extra Mile Recognition ProgramPTR Text Alerts & Open CommunicationPremier Truck Rental Is an Equal Opportunity Employer We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. If you need support or accommodation due to a disability, contact us at PI6e547fa1c5- #premier #truck #rental #inside #sales
    WEWORKREMOTELY.COM
    Premier Truck Rental: Inside Sales Representative - Remote Salt Lake Area
    Are you in search of a company that resonates with your proactive spirit and entrepreneurial mindset? Your search ends here with Premier Truck Rental! Company Overview At Premier Truck Rental (PTR), we provide customized commercial fleet rentals nationwide, helping businesses get the right trucks and equipment to get the job done. Headquartered in Fort Wayne, Indiana, PTR is a family-owned company built on a foundation of integrity, innovation, and exceptional service. We serve a wide range of industriesincluding construction, utilities, and infrastructureby delivering high-quality, ready-to-work trucks and trailers tailored to each customers needs. At PTR, we dont just rent truckswe partner with our customers to drive efficiency and success on every job site. Please keep reading Not sure if you meet every requirement? Thats okay! We encourage you to apply if youre passionate, hardworking, and eager to contribute. We know that diverse perspectives and experiences make us stronger, and we want you to be part of our journey. Inside Sales Representative (ISR) at PTR is a friendly, people-oriented, and persuasive steward of the sales process. This role will support our Territory Managers with their sales pipeline while also prospecting and cross-selling PTR products themselves. This support includes driving results by enrolling the commitment and buy-in of other internal departments to achieve sales initiatives. The Inside Sales Representative will also represent PTRs commitment to being our customers easy button by serving as the main point of contact. They will be the front-line hero by assisting them in making informed decisions, providing guidance on our rentals, and resolving any issues they might face. We are seeking someone eager to develop their sales skills and grow within our organization. This role is designed as a stepping stone to a Territory Sales Manager (TSM) position, providing hands-on experience with customer interactions, lead qualification, and sales process execution. Ideal candidates will demonstrate a strong drive for results, the ability to build relationships, and a proactive approach to learning and development. High-performing ISRs will have the opportunity to be mentored, trained, and considered for promotion into a TSM role as part of their career path at PTR. COMPENSATION This position offers a competitive compensation package of base salary ($50,000/yr) plus uncapped commissions =OTE $85,000 annually. RESPONSIBILITIES Offer top-notch customer service and respond with a sense of urgency for goal achievement in a fast-paced sales environment. Build a strong pipeline of customers by qualifying potential leads in your territory. This includes strategic prospecting and sourcing. Develop creative ways to engage and build rapport with prospective customers by pitching the Premier Truck Rental value proposition. Partner with assigned Territory Managers by assisting with scheduling customer visits, trade shows, new customer hand-offs, and any other travel requested. Facilitate in-person meetings and set appointments with prospective customers. Qualify and quote inquiries for your prospective territories both online and from the Territory Manager. Input data into the system with accuracy and follow up in a timely fashion. Facilitate the onboarding of new customers through the credit process. Drive collaboration between customers, Territory Managers, Logistics, and internal teams to coordinate On-Rent and Off-Rent notices with excellent attention to detail. Identify and arrange the swap of equipment from customers meeting the PTR de-fleeting criteria. Manage the sales tools to organize, compile, and analyze data with accuracy for a variety of activities and multiple projects occurring simultaneously.Building and developing a new 3-4 state territory! REQUIREMENTS MUST HAVE2+ years of strategic prospecting or account manager/sales experience; or an advanced degree or equivalent experience converting prospects into closed sales. Tech-forward approach to sales strategy. Excellent prospecting, follow-up, and follow-through skills. Committed to seeing deals through completion. Accountability and ownership of the sales process and a strong commitment to results. Comfortable with a job that has a variety of tasks and is dynamic and changing. Proactive prospecting skills and can overcome objections; driven to establish relationships with new customers. Ability to communicate in a clear, logical manner in formal and informal situations. Proficiency in CRMs and sales tracking systems Hunters mindsetsomeone who thrives on pursuing new business, driving outbound sales, and generating qualified opportunities. Prospecting: Going on LinkedIn, Looking at Competitor data, grabbing contacts for the TM, may use technology like Apollo and LinkedIn Sales Navigator Partner closely with the Territory Manager to ensure a unified approach in managing customer relationships, pipeline development, and revenue growth. Maintain clear and consistent communication to align on sales strategies, customer needs, and market opportunities, fostering a seamless and collaborative partnership with the Territory Manager. Consistently meet and exceed key performance indicators (KPIs), including rental revenue, upfit revenue, and conversion rates, by actively managing customer accounts and identifying growth opportunities. Support the saturation and maturation of the customer base through strategic outreach, relationship management, and alignment with the Territory Manager to drive long-term success. Remote in the United States with some travel to trade shows, quarterly travel up to a week at a time, and sales meetingsNICE TO HAVE Rental and/or sales experience in the industry. Proficiency in , Apollo.io , LinkedIn Sales Navigator, Power BI, MS Dynamics, Chat GPT. Established relationships within the marketplace or territory. Motivated to grow into outside territory management position with relocation On Target Earnings: ($85,000)EMPLOYEE BENEFITSWellness & Fitness: Take advantage of our on-site CrossFit-style gym, featuring a full-time personal trainer dedicated to helping you reach your fitness goals. Whether you're into group classes, virtual personal training, personalized workout plans, or nutrition coaching, weve got you covered!Exclusive Employee Perks: PTR Swag & a Uniform/Boot Allowance, On-site Micro-Markets stocked with snacks & essentials, discounts on phone plans, supplier vehicles, mobile detailing, tools, & equipmentand much more!Profit SharingYour Success, rewarded: At PTR, we believe in sharing success. Our Profit-SharingComprehensive BenefitsStarting Day One:Premium healthcare coverage (medical, dental, vision, mental health & virtual healthcare)401(k) matching & long-term financial planning Paid time off that lets you recharge Life, accidental death, and disability coverage Ongoing learning & development opportunitiesTraining, Growth & RecognitionWe partner with Predictive Index to better understand your strengths, ensuring tailored coaching, structured training, and career development. Performance and attitude evaluations every 6 months keep you on track for growth.Culture & ConnectionMore Than Just a JobAt PTR, we dont just build relationships with our customerswe build them with each other. Our tech-forward, highly collaborative culture is rooted in our core values. Connect and engage through:PTR Field Days & Team EventsThe Extra Mile Recognition ProgramPTR Text Alerts & Open CommunicationPremier Truck Rental Is an Equal Opportunity Employer We are an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. If you need support or accommodation due to a disability, contact us at PI6e547fa1c5-
    0 Комментарии 0 Поделились
  • The Download: gambling with humanity’s future, and the FDA under Trump

    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.Tech billionaires are making a risky bet with humanity’s future

    Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals, but their grand visions for the next decade and beyond are remarkably similar.They include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality; establishing a permanent, self-­sustaining colony on Mars; and, ultimately, spreading out across the cosmos.Three features play a central role with powering these visions, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits.In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker reveals how these fantastical visions conceal a darker agenda. Read the full story.

    —Bryan Gardiner

    This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands!

    Here’s what food and drug regulation might look like under the Trump administration

    Earlier this week, two new leaders of the US Food and Drug Administration published a list of priorities for the agency. Both Marty Makary and Vinay Prasad are controversial figures in the science community. They were generally highly respected academics until the covid pandemic, when their contrarian opinions on masking, vaccines, and lockdowns turned many of their colleagues off them.

    Given all this, along with recent mass firings of FDA employees, lots of people were pretty anxious to see what this list might include—and what we might expect the future of food and drug regulation in the US to look like. So let’s dive into the pair’s plans for new investigations, speedy approvals, and the “unleashing” of AI.

    —Jessica Hamzelou

    This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here.

    The must-reads

    I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology.

    1 NASA is investigating leaks on the ISSIt’s postponed launching private astronauts to the station while it evaluates.+ Its core component has been springing small air leaks for months.+ Meanwhile, this Chinese probe is en route to a near-Earth asteroid.2 Undocumented migrants are using social media to warn of ICE raidsThe DIY networks are anonymously reporting police presences across LA.+ Platforms’ relationships with protest activism has changed drastically. 

    3 Google’s AI Overviews is hallucinating about the fatal Air India crashIt incorrectly stated that it involved an Airbus plane, not a Boeing 787.+ Why Google’s AI Overviews gets things wrong.4 Chinese engineers are sneaking suitcases of hard drives into the countryTo covertly train advanced AI models.+ The US is cracking down on Huawei’s ability to produce chips.+ What the US-China AI race overlooks.5 The National Hurricane Center is joining forces with DeepMindIt’s the first time the center has used AI to predict nature’s worst storms.+ Here’s what we know about hurricanes and climate change.6 OpenAI is working on a product with toymaker MattelAI-powered Barbies?!+ Nothing is safe from the creep of AI, not even playtime.+ OpenAI has ambitions to reach billions of users.7 Chatbots posing as licensed therapists may be breaking the lawDigital rights organizations have filed a complaint to the FTC.+ How do you teach an AI model to give therapy?8 Major companies are abandoning their climate commitmentsBut some experts argue this may not be entirely bad.+ Google, Amazon and the problem with Big Tech’s climate claims.9 Vibe coding is shaking up software engineeringEven though AI-generated code is inherently unreliable.+ What is vibe coding, exactly?10 TikTok really loves hotdogs And who can blame it?Quote of the day

    “It kind of jams two years of work into two months.”

    —Andrew Butcher, president of the Maine Connectivity Authority, tells Ars Technica why it’s so difficult to meet the Trump administration’s new plans to increase broadband access in certain states.

    One more thing

    The surprising barrier that keeps us from building the housing we needIt’s a tough time to try and buy a home in America. From the beginning of the pandemic to early 2024, US home prices rose by 47%. In large swaths of the country, buying a home is no longer a possibility even for those with middle-class incomes. For many, that marks the end of an American dream built around owning a house. Over the same time, rents have gone up 26%.The reason for the current rise in the cost of housing is clear to most economists: a lack of supply. Simply put, we don’t build enough houses and apartments, and we haven’t for years.

    But the reality is that even if we ease the endless permitting delays and begin cutting red tape, we will still be faced with a distressing fact: The construction industry is not very efficient when it comes to building stuff. Read the full story.

    —David Rotman

    We can still have nice things

    A place for comfort, fun and distraction to brighten up your day.+ If you’re one of the unlucky people who has triskaidekaphobia, look away now.+ 15-year old Nicholas is preparing to head from his home in the UK to Japan to become a professional sumo wrestler.+ Earlier this week, London played host to 20,000 women in bald caps. But why?+ Why do dads watch TV standing up? I need to know.
    #download #gambling #with #humanitys #future
    The Download: gambling with humanity’s future, and the FDA under Trump
    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.Tech billionaires are making a risky bet with humanity’s future Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals, but their grand visions for the next decade and beyond are remarkably similar.They include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality; establishing a permanent, self-­sustaining colony on Mars; and, ultimately, spreading out across the cosmos.Three features play a central role with powering these visions, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits.In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker reveals how these fantastical visions conceal a darker agenda. Read the full story. —Bryan Gardiner This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands! Here’s what food and drug regulation might look like under the Trump administration Earlier this week, two new leaders of the US Food and Drug Administration published a list of priorities for the agency. Both Marty Makary and Vinay Prasad are controversial figures in the science community. They were generally highly respected academics until the covid pandemic, when their contrarian opinions on masking, vaccines, and lockdowns turned many of their colleagues off them. Given all this, along with recent mass firings of FDA employees, lots of people were pretty anxious to see what this list might include—and what we might expect the future of food and drug regulation in the US to look like. So let’s dive into the pair’s plans for new investigations, speedy approvals, and the “unleashing” of AI. —Jessica Hamzelou This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 NASA is investigating leaks on the ISSIt’s postponed launching private astronauts to the station while it evaluates.+ Its core component has been springing small air leaks for months.+ Meanwhile, this Chinese probe is en route to a near-Earth asteroid.2 Undocumented migrants are using social media to warn of ICE raidsThe DIY networks are anonymously reporting police presences across LA.+ Platforms’ relationships with protest activism has changed drastically.  3 Google’s AI Overviews is hallucinating about the fatal Air India crashIt incorrectly stated that it involved an Airbus plane, not a Boeing 787.+ Why Google’s AI Overviews gets things wrong.4 Chinese engineers are sneaking suitcases of hard drives into the countryTo covertly train advanced AI models.+ The US is cracking down on Huawei’s ability to produce chips.+ What the US-China AI race overlooks.5 The National Hurricane Center is joining forces with DeepMindIt’s the first time the center has used AI to predict nature’s worst storms.+ Here’s what we know about hurricanes and climate change.6 OpenAI is working on a product with toymaker MattelAI-powered Barbies?!+ Nothing is safe from the creep of AI, not even playtime.+ OpenAI has ambitions to reach billions of users.7 Chatbots posing as licensed therapists may be breaking the lawDigital rights organizations have filed a complaint to the FTC.+ How do you teach an AI model to give therapy?8 Major companies are abandoning their climate commitmentsBut some experts argue this may not be entirely bad.+ Google, Amazon and the problem with Big Tech’s climate claims.9 Vibe coding is shaking up software engineeringEven though AI-generated code is inherently unreliable.+ What is vibe coding, exactly?10 TikTok really loves hotdogs And who can blame it?Quote of the day “It kind of jams two years of work into two months.” —Andrew Butcher, president of the Maine Connectivity Authority, tells Ars Technica why it’s so difficult to meet the Trump administration’s new plans to increase broadband access in certain states. One more thing The surprising barrier that keeps us from building the housing we needIt’s a tough time to try and buy a home in America. From the beginning of the pandemic to early 2024, US home prices rose by 47%. In large swaths of the country, buying a home is no longer a possibility even for those with middle-class incomes. For many, that marks the end of an American dream built around owning a house. Over the same time, rents have gone up 26%.The reason for the current rise in the cost of housing is clear to most economists: a lack of supply. Simply put, we don’t build enough houses and apartments, and we haven’t for years. But the reality is that even if we ease the endless permitting delays and begin cutting red tape, we will still be faced with a distressing fact: The construction industry is not very efficient when it comes to building stuff. Read the full story. —David Rotman We can still have nice things A place for comfort, fun and distraction to brighten up your day.+ If you’re one of the unlucky people who has triskaidekaphobia, look away now.+ 15-year old Nicholas is preparing to head from his home in the UK to Japan to become a professional sumo wrestler.+ Earlier this week, London played host to 20,000 women in bald caps. But why?+ Why do dads watch TV standing up? I need to know. #download #gambling #with #humanitys #future
    WWW.TECHNOLOGYREVIEW.COM
    The Download: gambling with humanity’s future, and the FDA under Trump
    This is today’s edition of The Download, our weekday newsletter that provides a daily dose of what’s going on in the world of technology.Tech billionaires are making a risky bet with humanity’s future Sam Altman, Jeff Bezos, Elon Musk, and others may have slightly different goals, but their grand visions for the next decade and beyond are remarkably similar.They include aligning AI with the interests of humanity; creating an artificial superintelligence that will solve all the world’s most pressing problems; merging with that superintelligence to achieve immortality (or something close to it); establishing a permanent, self-­sustaining colony on Mars; and, ultimately, spreading out across the cosmos.Three features play a central role with powering these visions, says Adam Becker, a science writer and astrophysicist: an unshakable certainty that technology can solve any problem, a belief in the necessity of perpetual growth, and a quasi-religious obsession with transcending our physical and biological limits.In his timely new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, Becker reveals how these fantastical visions conceal a darker agenda. Read the full story. —Bryan Gardiner This story is from the next print edition of MIT Technology Review, which explores power—who has it, and who wants it. It’s set to go live on Wednesday June 25, so subscribe & save 25% to read it and get a copy of the issue when it lands! Here’s what food and drug regulation might look like under the Trump administration Earlier this week, two new leaders of the US Food and Drug Administration published a list of priorities for the agency. Both Marty Makary and Vinay Prasad are controversial figures in the science community. They were generally highly respected academics until the covid pandemic, when their contrarian opinions on masking, vaccines, and lockdowns turned many of their colleagues off them. Given all this, along with recent mass firings of FDA employees, lots of people were pretty anxious to see what this list might include—and what we might expect the future of food and drug regulation in the US to look like. So let’s dive into the pair’s plans for new investigations, speedy approvals, and the “unleashing” of AI. —Jessica Hamzelou This article first appeared in The Checkup, MIT Technology Review’s weekly biotech newsletter. To receive it in your inbox every Thursday, and read articles like this first, sign up here. The must-reads I’ve combed the internet to find you today’s most fun/important/scary/fascinating stories about technology. 1 NASA is investigating leaks on the ISSIt’s postponed launching private astronauts to the station while it evaluates. (WP $)+ Its core component has been springing small air leaks for months. (Reuters)+ Meanwhile, this Chinese probe is en route to a near-Earth asteroid. (Wired $) 2 Undocumented migrants are using social media to warn of ICE raidsThe DIY networks are anonymously reporting police presences across LA. (Wired $)+ Platforms’ relationships with protest activism has changed drastically. (NY Mag $)  3 Google’s AI Overviews is hallucinating about the fatal Air India crashIt incorrectly stated that it involved an Airbus plane, not a Boeing 787. (Ars Technica)+ Why Google’s AI Overviews gets things wrong. (MIT Technology Review) 4 Chinese engineers are sneaking suitcases of hard drives into the countryTo covertly train advanced AI models. (WSJ $)+ The US is cracking down on Huawei’s ability to produce chips. (Bloomberg $)+ What the US-China AI race overlooks. (Rest of World) 5 The National Hurricane Center is joining forces with DeepMindIt’s the first time the center has used AI to predict nature’s worst storms. (NYT $)+ Here’s what we know about hurricanes and climate change. (MIT Technology Review) 6 OpenAI is working on a product with toymaker MattelAI-powered Barbies?! (FT $)+ Nothing is safe from the creep of AI, not even playtime. (LA Times $)+ OpenAI has ambitions to reach billions of users. (Bloomberg $) 7 Chatbots posing as licensed therapists may be breaking the lawDigital rights organizations have filed a complaint to the FTC. (404 Media)+ How do you teach an AI model to give therapy? (MIT Technology Review) 8 Major companies are abandoning their climate commitmentsBut some experts argue this may not be entirely bad. (Bloomberg $)+ Google, Amazon and the problem with Big Tech’s climate claims. (MIT Technology Review) 9 Vibe coding is shaking up software engineeringEven though AI-generated code is inherently unreliable. (Wired $)+ What is vibe coding, exactly? (MIT Technology Review) 10 TikTok really loves hotdogs And who can blame it? (Insider $) Quote of the day “It kind of jams two years of work into two months.” —Andrew Butcher, president of the Maine Connectivity Authority, tells Ars Technica why it’s so difficult to meet the Trump administration’s new plans to increase broadband access in certain states. One more thing The surprising barrier that keeps us from building the housing we needIt’s a tough time to try and buy a home in America. From the beginning of the pandemic to early 2024, US home prices rose by 47%. In large swaths of the country, buying a home is no longer a possibility even for those with middle-class incomes. For many, that marks the end of an American dream built around owning a house. Over the same time, rents have gone up 26%.The reason for the current rise in the cost of housing is clear to most economists: a lack of supply. Simply put, we don’t build enough houses and apartments, and we haven’t for years. But the reality is that even if we ease the endless permitting delays and begin cutting red tape, we will still be faced with a distressing fact: The construction industry is not very efficient when it comes to building stuff. Read the full story. —David Rotman We can still have nice things A place for comfort, fun and distraction to brighten up your day. (Got any ideas? Drop me a line or skeet ’em at me.) + If you’re one of the unlucky people who has triskaidekaphobia, look away now.+ 15-year old Nicholas is preparing to head from his home in the UK to Japan to become a professional sumo wrestler.+ Earlier this week, London played host to 20,000 women in bald caps. But why? ($)+ Why do dads watch TV standing up? I need to know.
    0 Комментарии 0 Поделились
  • How a planetarium show discovered a spiral at the edge of our solar system

    If you’ve ever flown through outer space, at least while watching a documentary or a science fiction film, you’ve seen how artists turn astronomical findings into stunning visuals. But in the process of visualizing data for their latest planetarium show, a production team at New York’s American Museum of Natural History made a surprising discovery of their own: a trillion-and-a-half mile long spiral of material drifting along the edge of our solar system.

    “So this is a really fun thing that happened,” says Jackie Faherty, the museum’s senior scientist.

    Last winter, Faherty and her colleagues were beneath the dome of the museum’s Hayden Planetarium, fine-tuning a scene that featured the Oort cloud, the big, thick bubble surrounding our Sun and planets that’s filled with ice and rock and other remnants from the solar system’s infancy. The Oort cloud begins far beyond Neptune, around one and a half light years from the Sun. It has never been directly observed; its existence is inferred from the behavior of long-period comets entering the inner solar system. The cloud is so expansive that the Voyager spacecraft, our most distant probes, would need another 250 years just to reach its inner boundary; to reach the other side, they would need about 30,000 years. 

    The 30-minute show, Encounters in the Milky Way, narrated by Pedro Pascal, guides audiences on a trip through the galaxy across billions of years. For a section about our nascent solar system, the writing team decided “there’s going to be a fly-by” of the Oort cloud, Faherty says. “But what does our Oort cloud look like?” 

    To find out, the museum consulted astronomers and turned to David Nesvorný, a scientist at the Southwest Research Institute in San Antonio. He provided his model of the millions of particles believed to make up the Oort cloud, based on extensive observational data.

    “Everybody said, go talk to Nesvorný. He’s got the best model,” says Faherty. And “everybody told us, ‘There’s structure in the model,’ so we were kind of set up to look for stuff,” she says. 

    The museum’s technical team began using Nesvorný’s model to simulate how the cloud evolved over time. Later, as the team projected versions of the fly-by scene into the dome, with the camera looking back at the Oort cloud, they saw a familiar shape, one that appears in galaxies, Saturn’s rings, and disks around young stars.

    “We’re flying away from the Oort cloud and out pops this spiral, a spiral shape to the outside of our solar system,” Faherty marveled. “A huge structure, millions and millions of particles.”

    She emailed Nesvorný to ask for “more particles,” with a render of the scene attached. “We noticed the spiral of course,” she wrote. “And then he writes me back: ‘what are you talking about, a spiral?’” 

    While fine-tuning a simulation of the Oort cloud, a vast expanse of ice material leftover from the birth of our Sun, the ‘Encounters in the Milky Way’ production team noticed a very clear shape: a structure made of billions of comets and shaped like a spiral-armed galaxy, seen here in a scene from the final Space ShowMore simulations ensued, this time on Pleiades, a powerful NASA supercomputer. In high-performance computer simulations spanning 4.6 billion years, starting from the Solar System’s earliest days, the researchers visualized how the initial icy and rocky ingredients of the Oort cloud began circling the Sun, in the elliptical orbits that are thought to give the cloud its rough disc shape. The simulations also incorporated the physics of the Sun’s gravitational pull, the influences from our Milky Way galaxy, and the movements of the comets themselves. 

    In each simulation, the spiral persisted.

    “No one has ever seen the Oort structure like that before,” says Faherty. Nesvorný “has a great quote about this: ‘The math was all there. We just needed the visuals.’” 

    An illustration of the Kuiper Belt and Oort Cloud in relation to our solar system.As the Oort cloud grew with the early solar system, Nesvorný and his colleagues hypothesize that the galactic tide, or the gravitational force from the Milky Way, disrupted the orbits of some comets. Although the Sun pulls these objects inward, the galaxy’s gravity appears to have twisted part of the Oort cloud outward, forming a spiral tilted roughly 30 degrees from the plane of the solar system.

    “As the galactic tide acts to decouple bodies from the scattered disk it creates a spiral structure in physical space that is roughly 15,000 astronomical units in length,” or around 1.4 trillion miles from one end to the other, the researchers write in a paper that was published in March in the Astrophysical Journal. “The spiral is long-lived and persists in the inner Oort Cloud to the present time.”

    “The physics makes sense,” says Faherty. “Scientists, we’re amazing at what we do, but it doesn’t mean we can see everything right away.”

    It helped that the team behind the space show was primed to look for something, says Carter Emmart, the museum’s director of astrovisualization and director of Encounters. Astronomers had described Nesvorný’s model as having “a structure,” which intrigued the team’s artists. “We were also looking for structure so that it wouldn’t just be sort of like a big blob,” he says. “Other models were also revealing this—but they just hadn’t been visualized.”

    The museum’s attempts to simulate nature date back to its first habitat dioramas in the early 1900s, which brought visitors to places that hadn’t yet been captured by color photos, TV, or the web. The planetarium, a night sky simulator for generations of would-be scientists and astronauts, got its start after financier Charles Hayden bought the museum its first Zeiss projector. The planetarium now boasts one of the world’s few Zeiss Mark IX systems.

    Still, these days the star projector is rarely used, Emmart says, now that fulldome laser projectors can turn the old static starfield into 3D video running at 60 frames per second. The Hayden boasts six custom-built Christie projectors, part of what the museum’s former president called “the most advanced planetarium ever attempted.”

     In about 1.3 million years, the star system Gliese 710 is set to pass directly through our Oort Cloud, an event visualized in a dramatic scene in ‘Encounters in the Milky Way.’ During its flyby, our systems will swap icy comets, flinging some out on new paths.Emmart recalls how in 1998, when he and other museum leaders were imagining the future of space shows at the Hayden—now with the help of digital projectors and computer graphics—there were questions over how much space they could try to show.

    “We’re talking about these astronomical data sets we could plot to make the galaxy and the stars,” he says. “Of course, we knew that we would have this star projector, but we really wanted to emphasize astrophysics with this dome video system. I was drawing pictures of this just to get our heads around it and noting the tip of the solar system to the Milky Way is about 60 degrees. And I said, what are we gonna do when we get outside the Milky Way?’

    “ThenNeil Degrasse Tyson “goes, ‘whoa, whoa, whoa, Carter, we have enough to do. And just plotting the Milky Way, that’s hard enough.’ And I said, ‘well, when we exit the Milky Way and we don’t see any other galaxies, that’s sort of like astronomy in 1920—we thought maybe the entire universe is just a Milky Way.'”

    “And that kind of led to a chaotic discussion about, well, what other data sets are there for this?” Emmart adds.

    The museum worked with astronomer Brent Tully, who had mapped 3500 galaxies beyond the Milky Way, in collaboration with the National Center for Super Computing Applications. “That was it,” he says, “and that seemed fantastical.”

    By the time the first planetarium show opened at the museum’s new Rose Center for Earth and Space in 2000, Tully had broadened his survey “to an amazing” 30,000 galaxies. The Sloan Digital Sky Survey followed—it’s now at data release 18—with six million galaxies.

    To build the map of the universe that underlies Encounters, the team also relied on data from the European Space Agency’s space observatory, Gaia. Launched in 2013 and powered down in March of this year, Gaia brought an unprecedented precision to our astronomical map, plotting the distance between 1.7 billion stars. To visualize and render the simulated data, Jon Parker, the museum’s lead technical director, relied on Houdini, a 3D animation tool by Toronto-based SideFX.

    The goal is immersion, “whether it’s in front of the buffalo downstairs, and seeing what those herds were like before we decimated them, to coming in this room and being teleported to space, with an accurate foundation in the science,” Emmart says. “But the art is important, because the art is the way to the soul.” 

    The museum, he adds, is “a testament to wonder. And I think wonder is a gateway to inspiration, and inspiration is a gateway to motivation.”

    Three-D visuals aren’t just powerful tools for communicating science, but increasingly crucial for science itself. Software like OpenSpace, an open source simulation tool developed by the museum, along with the growing availability of high-performance computing, are making it easier to build highly detailed visuals of ever larger and more complex collections of data.

    “Anytime we look, literally, from a different angle at catalogs of astronomical positions, simulations, or exploring the phase space of a complex data set, there is great potential to discover something new,” says Brian R. Kent, an astronomer and director of science communications at National Radio Astronomy Observatory. “There is also a wealth of astronomics tatical data in archives that can be reanalyzed in new ways, leading to new discoveries.”

    As the instruments grow in size and sophistication, so does the data, and the challenge of understanding it. Like all scientists, astronomers are facing a deluge of data, ranging from gamma rays and X-rays to ultraviolet, optical, infrared, and radio bands.

    Our Oort cloud, a shell of icy bodies that surrounds the solar system and extends one-and-a-half light years in every direction, is shown in this scene from ‘Encounters in the Milky Way’ along with the Oort clouds of neighboring stars. The more massive the star, the larger its Oort cloud“New facilities like the Next Generation Very Large Array here at NRAO or the Vera Rubin Observatory and LSST survey project will generate large volumes of data, so astronomers have to get creative with how to analyze it,” says Kent. 

    More data—and new instruments—will also be needed to prove the spiral itself is actually there: there’s still no known way to even observe the Oort cloud. 

    Instead, the paper notes, the structure will have to be measured from “detection of a large number of objects” in the radius of the inner Oort cloud or from “thermal emission from small particles in the Oort spiral.” 

    The Vera C. Rubin Observatory, a powerful, U.S.-funded telescope that recently began operation in Chile, could possibly observe individual icy bodies within the cloud. But researchers expect the telescope will likely discover only dozens of these objects, maybe hundreds, not enough to meaningfully visualize any shapes in the Oort cloud. 

    For us, here and now, the 1.4 trillion mile-long spiral will remain confined to the inside of a dark dome across the street from Central Park.
    #how #planetarium #show #discovered #spiral
    How a planetarium show discovered a spiral at the edge of our solar system
    If you’ve ever flown through outer space, at least while watching a documentary or a science fiction film, you’ve seen how artists turn astronomical findings into stunning visuals. But in the process of visualizing data for their latest planetarium show, a production team at New York’s American Museum of Natural History made a surprising discovery of their own: a trillion-and-a-half mile long spiral of material drifting along the edge of our solar system. “So this is a really fun thing that happened,” says Jackie Faherty, the museum’s senior scientist. Last winter, Faherty and her colleagues were beneath the dome of the museum’s Hayden Planetarium, fine-tuning a scene that featured the Oort cloud, the big, thick bubble surrounding our Sun and planets that’s filled with ice and rock and other remnants from the solar system’s infancy. The Oort cloud begins far beyond Neptune, around one and a half light years from the Sun. It has never been directly observed; its existence is inferred from the behavior of long-period comets entering the inner solar system. The cloud is so expansive that the Voyager spacecraft, our most distant probes, would need another 250 years just to reach its inner boundary; to reach the other side, they would need about 30,000 years.  The 30-minute show, Encounters in the Milky Way, narrated by Pedro Pascal, guides audiences on a trip through the galaxy across billions of years. For a section about our nascent solar system, the writing team decided “there’s going to be a fly-by” of the Oort cloud, Faherty says. “But what does our Oort cloud look like?”  To find out, the museum consulted astronomers and turned to David Nesvorný, a scientist at the Southwest Research Institute in San Antonio. He provided his model of the millions of particles believed to make up the Oort cloud, based on extensive observational data. “Everybody said, go talk to Nesvorný. He’s got the best model,” says Faherty. And “everybody told us, ‘There’s structure in the model,’ so we were kind of set up to look for stuff,” she says.  The museum’s technical team began using Nesvorný’s model to simulate how the cloud evolved over time. Later, as the team projected versions of the fly-by scene into the dome, with the camera looking back at the Oort cloud, they saw a familiar shape, one that appears in galaxies, Saturn’s rings, and disks around young stars. “We’re flying away from the Oort cloud and out pops this spiral, a spiral shape to the outside of our solar system,” Faherty marveled. “A huge structure, millions and millions of particles.” She emailed Nesvorný to ask for “more particles,” with a render of the scene attached. “We noticed the spiral of course,” she wrote. “And then he writes me back: ‘what are you talking about, a spiral?’”  While fine-tuning a simulation of the Oort cloud, a vast expanse of ice material leftover from the birth of our Sun, the ‘Encounters in the Milky Way’ production team noticed a very clear shape: a structure made of billions of comets and shaped like a spiral-armed galaxy, seen here in a scene from the final Space ShowMore simulations ensued, this time on Pleiades, a powerful NASA supercomputer. In high-performance computer simulations spanning 4.6 billion years, starting from the Solar System’s earliest days, the researchers visualized how the initial icy and rocky ingredients of the Oort cloud began circling the Sun, in the elliptical orbits that are thought to give the cloud its rough disc shape. The simulations also incorporated the physics of the Sun’s gravitational pull, the influences from our Milky Way galaxy, and the movements of the comets themselves.  In each simulation, the spiral persisted. “No one has ever seen the Oort structure like that before,” says Faherty. Nesvorný “has a great quote about this: ‘The math was all there. We just needed the visuals.’”  An illustration of the Kuiper Belt and Oort Cloud in relation to our solar system.As the Oort cloud grew with the early solar system, Nesvorný and his colleagues hypothesize that the galactic tide, or the gravitational force from the Milky Way, disrupted the orbits of some comets. Although the Sun pulls these objects inward, the galaxy’s gravity appears to have twisted part of the Oort cloud outward, forming a spiral tilted roughly 30 degrees from the plane of the solar system. “As the galactic tide acts to decouple bodies from the scattered disk it creates a spiral structure in physical space that is roughly 15,000 astronomical units in length,” or around 1.4 trillion miles from one end to the other, the researchers write in a paper that was published in March in the Astrophysical Journal. “The spiral is long-lived and persists in the inner Oort Cloud to the present time.” “The physics makes sense,” says Faherty. “Scientists, we’re amazing at what we do, but it doesn’t mean we can see everything right away.” It helped that the team behind the space show was primed to look for something, says Carter Emmart, the museum’s director of astrovisualization and director of Encounters. Astronomers had described Nesvorný’s model as having “a structure,” which intrigued the team’s artists. “We were also looking for structure so that it wouldn’t just be sort of like a big blob,” he says. “Other models were also revealing this—but they just hadn’t been visualized.” The museum’s attempts to simulate nature date back to its first habitat dioramas in the early 1900s, which brought visitors to places that hadn’t yet been captured by color photos, TV, or the web. The planetarium, a night sky simulator for generations of would-be scientists and astronauts, got its start after financier Charles Hayden bought the museum its first Zeiss projector. The planetarium now boasts one of the world’s few Zeiss Mark IX systems. Still, these days the star projector is rarely used, Emmart says, now that fulldome laser projectors can turn the old static starfield into 3D video running at 60 frames per second. The Hayden boasts six custom-built Christie projectors, part of what the museum’s former president called “the most advanced planetarium ever attempted.”  In about 1.3 million years, the star system Gliese 710 is set to pass directly through our Oort Cloud, an event visualized in a dramatic scene in ‘Encounters in the Milky Way.’ During its flyby, our systems will swap icy comets, flinging some out on new paths.Emmart recalls how in 1998, when he and other museum leaders were imagining the future of space shows at the Hayden—now with the help of digital projectors and computer graphics—there were questions over how much space they could try to show. “We’re talking about these astronomical data sets we could plot to make the galaxy and the stars,” he says. “Of course, we knew that we would have this star projector, but we really wanted to emphasize astrophysics with this dome video system. I was drawing pictures of this just to get our heads around it and noting the tip of the solar system to the Milky Way is about 60 degrees. And I said, what are we gonna do when we get outside the Milky Way?’ “ThenNeil Degrasse Tyson “goes, ‘whoa, whoa, whoa, Carter, we have enough to do. And just plotting the Milky Way, that’s hard enough.’ And I said, ‘well, when we exit the Milky Way and we don’t see any other galaxies, that’s sort of like astronomy in 1920—we thought maybe the entire universe is just a Milky Way.'” “And that kind of led to a chaotic discussion about, well, what other data sets are there for this?” Emmart adds. The museum worked with astronomer Brent Tully, who had mapped 3500 galaxies beyond the Milky Way, in collaboration with the National Center for Super Computing Applications. “That was it,” he says, “and that seemed fantastical.” By the time the first planetarium show opened at the museum’s new Rose Center for Earth and Space in 2000, Tully had broadened his survey “to an amazing” 30,000 galaxies. The Sloan Digital Sky Survey followed—it’s now at data release 18—with six million galaxies. To build the map of the universe that underlies Encounters, the team also relied on data from the European Space Agency’s space observatory, Gaia. Launched in 2013 and powered down in March of this year, Gaia brought an unprecedented precision to our astronomical map, plotting the distance between 1.7 billion stars. To visualize and render the simulated data, Jon Parker, the museum’s lead technical director, relied on Houdini, a 3D animation tool by Toronto-based SideFX. The goal is immersion, “whether it’s in front of the buffalo downstairs, and seeing what those herds were like before we decimated them, to coming in this room and being teleported to space, with an accurate foundation in the science,” Emmart says. “But the art is important, because the art is the way to the soul.”  The museum, he adds, is “a testament to wonder. And I think wonder is a gateway to inspiration, and inspiration is a gateway to motivation.” Three-D visuals aren’t just powerful tools for communicating science, but increasingly crucial for science itself. Software like OpenSpace, an open source simulation tool developed by the museum, along with the growing availability of high-performance computing, are making it easier to build highly detailed visuals of ever larger and more complex collections of data. “Anytime we look, literally, from a different angle at catalogs of astronomical positions, simulations, or exploring the phase space of a complex data set, there is great potential to discover something new,” says Brian R. Kent, an astronomer and director of science communications at National Radio Astronomy Observatory. “There is also a wealth of astronomics tatical data in archives that can be reanalyzed in new ways, leading to new discoveries.” As the instruments grow in size and sophistication, so does the data, and the challenge of understanding it. Like all scientists, astronomers are facing a deluge of data, ranging from gamma rays and X-rays to ultraviolet, optical, infrared, and radio bands. Our Oort cloud, a shell of icy bodies that surrounds the solar system and extends one-and-a-half light years in every direction, is shown in this scene from ‘Encounters in the Milky Way’ along with the Oort clouds of neighboring stars. The more massive the star, the larger its Oort cloud“New facilities like the Next Generation Very Large Array here at NRAO or the Vera Rubin Observatory and LSST survey project will generate large volumes of data, so astronomers have to get creative with how to analyze it,” says Kent.  More data—and new instruments—will also be needed to prove the spiral itself is actually there: there’s still no known way to even observe the Oort cloud.  Instead, the paper notes, the structure will have to be measured from “detection of a large number of objects” in the radius of the inner Oort cloud or from “thermal emission from small particles in the Oort spiral.”  The Vera C. Rubin Observatory, a powerful, U.S.-funded telescope that recently began operation in Chile, could possibly observe individual icy bodies within the cloud. But researchers expect the telescope will likely discover only dozens of these objects, maybe hundreds, not enough to meaningfully visualize any shapes in the Oort cloud.  For us, here and now, the 1.4 trillion mile-long spiral will remain confined to the inside of a dark dome across the street from Central Park. #how #planetarium #show #discovered #spiral
    WWW.FASTCOMPANY.COM
    How a planetarium show discovered a spiral at the edge of our solar system
    If you’ve ever flown through outer space, at least while watching a documentary or a science fiction film, you’ve seen how artists turn astronomical findings into stunning visuals. But in the process of visualizing data for their latest planetarium show, a production team at New York’s American Museum of Natural History made a surprising discovery of their own: a trillion-and-a-half mile long spiral of material drifting along the edge of our solar system. “So this is a really fun thing that happened,” says Jackie Faherty, the museum’s senior scientist. Last winter, Faherty and her colleagues were beneath the dome of the museum’s Hayden Planetarium, fine-tuning a scene that featured the Oort cloud, the big, thick bubble surrounding our Sun and planets that’s filled with ice and rock and other remnants from the solar system’s infancy. The Oort cloud begins far beyond Neptune, around one and a half light years from the Sun. It has never been directly observed; its existence is inferred from the behavior of long-period comets entering the inner solar system. The cloud is so expansive that the Voyager spacecraft, our most distant probes, would need another 250 years just to reach its inner boundary; to reach the other side, they would need about 30,000 years.  The 30-minute show, Encounters in the Milky Way, narrated by Pedro Pascal, guides audiences on a trip through the galaxy across billions of years. For a section about our nascent solar system, the writing team decided “there’s going to be a fly-by” of the Oort cloud, Faherty says. “But what does our Oort cloud look like?”  To find out, the museum consulted astronomers and turned to David Nesvorný, a scientist at the Southwest Research Institute in San Antonio. He provided his model of the millions of particles believed to make up the Oort cloud, based on extensive observational data. “Everybody said, go talk to Nesvorný. He’s got the best model,” says Faherty. And “everybody told us, ‘There’s structure in the model,’ so we were kind of set up to look for stuff,” she says.  The museum’s technical team began using Nesvorný’s model to simulate how the cloud evolved over time. Later, as the team projected versions of the fly-by scene into the dome, with the camera looking back at the Oort cloud, they saw a familiar shape, one that appears in galaxies, Saturn’s rings, and disks around young stars. “We’re flying away from the Oort cloud and out pops this spiral, a spiral shape to the outside of our solar system,” Faherty marveled. “A huge structure, millions and millions of particles.” She emailed Nesvorný to ask for “more particles,” with a render of the scene attached. “We noticed the spiral of course,” she wrote. “And then he writes me back: ‘what are you talking about, a spiral?’”  While fine-tuning a simulation of the Oort cloud, a vast expanse of ice material leftover from the birth of our Sun, the ‘Encounters in the Milky Way’ production team noticed a very clear shape: a structure made of billions of comets and shaped like a spiral-armed galaxy, seen here in a scene from the final Space Show (curving, dusty S-shape behind the Sun) [Image: © AMNH] More simulations ensued, this time on Pleiades, a powerful NASA supercomputer. In high-performance computer simulations spanning 4.6 billion years, starting from the Solar System’s earliest days, the researchers visualized how the initial icy and rocky ingredients of the Oort cloud began circling the Sun, in the elliptical orbits that are thought to give the cloud its rough disc shape. The simulations also incorporated the physics of the Sun’s gravitational pull, the influences from our Milky Way galaxy, and the movements of the comets themselves.  In each simulation, the spiral persisted. “No one has ever seen the Oort structure like that before,” says Faherty. Nesvorný “has a great quote about this: ‘The math was all there. We just needed the visuals.’”  An illustration of the Kuiper Belt and Oort Cloud in relation to our solar system. [Image: NASA] As the Oort cloud grew with the early solar system, Nesvorný and his colleagues hypothesize that the galactic tide, or the gravitational force from the Milky Way, disrupted the orbits of some comets. Although the Sun pulls these objects inward, the galaxy’s gravity appears to have twisted part of the Oort cloud outward, forming a spiral tilted roughly 30 degrees from the plane of the solar system. “As the galactic tide acts to decouple bodies from the scattered disk it creates a spiral structure in physical space that is roughly 15,000 astronomical units in length,” or around 1.4 trillion miles from one end to the other, the researchers write in a paper that was published in March in the Astrophysical Journal. “The spiral is long-lived and persists in the inner Oort Cloud to the present time.” “The physics makes sense,” says Faherty. “Scientists, we’re amazing at what we do, but it doesn’t mean we can see everything right away.” It helped that the team behind the space show was primed to look for something, says Carter Emmart, the museum’s director of astrovisualization and director of Encounters. Astronomers had described Nesvorný’s model as having “a structure,” which intrigued the team’s artists. “We were also looking for structure so that it wouldn’t just be sort of like a big blob,” he says. “Other models were also revealing this—but they just hadn’t been visualized.” The museum’s attempts to simulate nature date back to its first habitat dioramas in the early 1900s, which brought visitors to places that hadn’t yet been captured by color photos, TV, or the web. The planetarium, a night sky simulator for generations of would-be scientists and astronauts, got its start after financier Charles Hayden bought the museum its first Zeiss projector. The planetarium now boasts one of the world’s few Zeiss Mark IX systems. Still, these days the star projector is rarely used, Emmart says, now that fulldome laser projectors can turn the old static starfield into 3D video running at 60 frames per second. The Hayden boasts six custom-built Christie projectors, part of what the museum’s former president called “the most advanced planetarium ever attempted.”  In about 1.3 million years, the star system Gliese 710 is set to pass directly through our Oort Cloud, an event visualized in a dramatic scene in ‘Encounters in the Milky Way.’ During its flyby, our systems will swap icy comets, flinging some out on new paths. [Image: © AMNH] Emmart recalls how in 1998, when he and other museum leaders were imagining the future of space shows at the Hayden—now with the help of digital projectors and computer graphics—there were questions over how much space they could try to show. “We’re talking about these astronomical data sets we could plot to make the galaxy and the stars,” he says. “Of course, we knew that we would have this star projector, but we really wanted to emphasize astrophysics with this dome video system. I was drawing pictures of this just to get our heads around it and noting the tip of the solar system to the Milky Way is about 60 degrees. And I said, what are we gonna do when we get outside the Milky Way?’ “Then [planetarium’s director] Neil Degrasse Tyson “goes, ‘whoa, whoa, whoa, Carter, we have enough to do. And just plotting the Milky Way, that’s hard enough.’ And I said, ‘well, when we exit the Milky Way and we don’t see any other galaxies, that’s sort of like astronomy in 1920—we thought maybe the entire universe is just a Milky Way.'” “And that kind of led to a chaotic discussion about, well, what other data sets are there for this?” Emmart adds. The museum worked with astronomer Brent Tully, who had mapped 3500 galaxies beyond the Milky Way, in collaboration with the National Center for Super Computing Applications. “That was it,” he says, “and that seemed fantastical.” By the time the first planetarium show opened at the museum’s new Rose Center for Earth and Space in 2000, Tully had broadened his survey “to an amazing” 30,000 galaxies. The Sloan Digital Sky Survey followed—it’s now at data release 18—with six million galaxies. To build the map of the universe that underlies Encounters, the team also relied on data from the European Space Agency’s space observatory, Gaia. Launched in 2013 and powered down in March of this year, Gaia brought an unprecedented precision to our astronomical map, plotting the distance between 1.7 billion stars. To visualize and render the simulated data, Jon Parker, the museum’s lead technical director, relied on Houdini, a 3D animation tool by Toronto-based SideFX. The goal is immersion, “whether it’s in front of the buffalo downstairs, and seeing what those herds were like before we decimated them, to coming in this room and being teleported to space, with an accurate foundation in the science,” Emmart says. “But the art is important, because the art is the way to the soul.”  The museum, he adds, is “a testament to wonder. And I think wonder is a gateway to inspiration, and inspiration is a gateway to motivation.” Three-D visuals aren’t just powerful tools for communicating science, but increasingly crucial for science itself. Software like OpenSpace, an open source simulation tool developed by the museum, along with the growing availability of high-performance computing, are making it easier to build highly detailed visuals of ever larger and more complex collections of data. “Anytime we look, literally, from a different angle at catalogs of astronomical positions, simulations, or exploring the phase space of a complex data set, there is great potential to discover something new,” says Brian R. Kent, an astronomer and director of science communications at National Radio Astronomy Observatory. “There is also a wealth of astronomics tatical data in archives that can be reanalyzed in new ways, leading to new discoveries.” As the instruments grow in size and sophistication, so does the data, and the challenge of understanding it. Like all scientists, astronomers are facing a deluge of data, ranging from gamma rays and X-rays to ultraviolet, optical, infrared, and radio bands. Our Oort cloud (center), a shell of icy bodies that surrounds the solar system and extends one-and-a-half light years in every direction, is shown in this scene from ‘Encounters in the Milky Way’ along with the Oort clouds of neighboring stars. The more massive the star, the larger its Oort cloud [Image: © AMNH ] “New facilities like the Next Generation Very Large Array here at NRAO or the Vera Rubin Observatory and LSST survey project will generate large volumes of data, so astronomers have to get creative with how to analyze it,” says Kent.  More data—and new instruments—will also be needed to prove the spiral itself is actually there: there’s still no known way to even observe the Oort cloud.  Instead, the paper notes, the structure will have to be measured from “detection of a large number of objects” in the radius of the inner Oort cloud or from “thermal emission from small particles in the Oort spiral.”  The Vera C. Rubin Observatory, a powerful, U.S.-funded telescope that recently began operation in Chile, could possibly observe individual icy bodies within the cloud. But researchers expect the telescope will likely discover only dozens of these objects, maybe hundreds, not enough to meaningfully visualize any shapes in the Oort cloud.  For us, here and now, the 1.4 trillion mile-long spiral will remain confined to the inside of a dark dome across the street from Central Park.
    0 Комментарии 0 Поделились