• Humpback Whales Are Approaching People to Blow Rings. What Are They Trying to Say?

    A bubble ring created by a humpback whale named Thorn. Image © Dan Knaub, The Video Company
    Humpback Whales Are Approaching People to Blow Rings. What Are They Trying to Say?
    June 13, 2025
    Nature, Social Issues
    Grace Ebert

    After the “orca uprising” captivated anti-capitalists around the world in 2023, scientists are intrigued by another form of marine mammal communication.
    A study released this month by the SETI Institute and the University of California, Davis dives into a newly documented phenomenon of humpback whales blowing bubble rings while interacting with humans. In contrast to the orcas’ aggressive behavior, researchers say the humpbacks appear to be friendly, relaxed, and even curious.
    Bubbles aren’t new to these aquatic giants, which typically release various shapes when corralling prey and courting mates. The study documents 12 distinct incidents involving 11 whales producing 39 rings; most of the whales approached boats near Hawaii, the Dominican Republic, Mo’orea, and the U.S. Atlantic coast of their own accord.
    The impact of this research reaches far beyond the oceans, though. Deciphering these non-verbal messages could aid in potential extraterrestrial communication, as the findings can help to “develop filters that aid in parsing cosmic signals for signs of extraterrestrial life,” a statement says.
    “Because of current limitations on technology, an important assumption of the search for extraterrestrial intelligence is that extraterrestrial intelligence and life will be interested in making contact and so target human receivers,” said Dr. Laurance Doyle, a SETI Institute scientist who co-wrote the paper. “This important assumption is certainly supported by the independent evolution of curious behavior in humpback whales.”

    A composite image of at least one bubble ring from each interaction
  • Love, Death + Robots – Volume 4: Tim Miller (Creator & Director) & Jennifer Yuh Nelson (Supervising Director)

    Interviews

    Love, Death + Robots – Volume 4: Tim Miller & Jennifer Yuh Nelson
    By Vincent Frei – 02/06/2025

    Earlier this year, Tim Miller spoke to us about his animated anthology Secret Level. Now, he returns to discuss the latest season of Love, Death + Robots.
    Jennifer Yuh Nelson talked about season two of Love, Death + Robots in 2021. She later worked on The Sea Beast, before returning once again to the anthology universe.
    What was your overall vision for the fourth season of Love, Death + Robots, and how did it evolve from previous seasons?
    Tim Miller (TM) // We have the same strategy as every volume – we try to pick the best stories we know of and provide a mix that is hopefully appealing to everyone. There are a lot of variables to consider, including genre, tone, and style of animation, such as stop motion, CG, and 2D.
    We try not to have two stories that are too similar. For example, if there’s already a military sci-fi story, we avoid selecting another one. We like to mix humor, horror, sci-fi, fantasy, and anything else that we think might be interesting from either a story or animation perspective.

    How did you approach the balance between experimenting with new styles and maintaining the signature identity of the show?
    TM // Honestly, we just try and follow our gut. What we think is interesting as filmmakers, animators, and storytellers will also be interesting to the animation community and fans alike. So, we keep an eye out for new voices, filmmakers, and new ways of doing things to keep things interesting.
    I’m not sure the show has an identity. In fact, I think if we did have one, it would be that we don’t have an identity… but we try and do whatever we think is interesting.
    Jennifer Yuh Nelson (JYN) // The fortunate thing about LDR is that the signature itself is experimenting with new styles. The trick is finding new, aggressively experimental styles that still communicate to a mass audience. The stories are key to that. If the story is engaging, even to an audience that doesn’t usually gravitate to animation, then you can make it look as weird as you want.

    What are some of the key challenges you faced while overseeing this season and how do you tackle them?
    TM // This season, there was a lot going on in the animation community that created some challenges with getting work done, whether studios were too full or ceased to exist entirely. Everyone struggled with budgets. But I didn’t feel it was a problem with our show so much as a problem with the entire industry. People were struggling.
    And then it’s just always difficult when your ambition is high, your budgets are reasonable but still challenging, and you have to wrangle hundreds of people to get on board with your vision.

    JYN // These shows take a long time to make. R&D for a look that doesn’t exist can take a lot of trial and error. For example, Emily Dean, who directed Very Pulse of the Machine last season, did For He Can Creep this season. She had a cool angle of making her episode look like lithography. That was very, very hard, but somehow Polygon, the studio that made both shorts, came through with it. And I think it turned out very well.

    Can you talk about how you selected the different animation studios for this season? What made you decide to work with the studios involved?
    JYN // We’ve been very fortunate to have worked with amazing people and studios these last few seasons, so it made sense to float some stories by them again. But it really comes down to the stories, and how each leans towards a certain technique. For example, How Zeke Got Religion was holding a slot where we wanted something 2D. We went to Titmouse because they were great with pushing the boundaries of 2D animation, and they suggested Diego Porral as a director who could bring a modern edge.

    How do you ensure each studio’s unique visual style complements the story and tone of each episode?
    TM // I know this sounds a little mystical and I don’t mean it to be, but I think the story speaks to you about style. Some things just feel right, and you have an innate concept of what would be the best version of the story, whether it’s stop motion, CG, 2D animation, or even live action. When you start thinking about the story in a creative way, a style becomes apparent. Which is not to say there aren’t many ways to do things and tell stories, but we feel a best version becomes clear.

    JYN // We do a lot of research, not just into what the studios have done before, but also into what they wish to do but haven’t had the chance to do. Often it’s just a matter of getting to know them and seeing if they have a philosophy of pushing for experimentation and risk. Then we try to support them as much as possible in their creative R&D.

    You both directed episodes for this season. What was that experience like? How did it differ from your work as supervising directors?
    TM // For me, it’s really just trying to create the best story, and I love working with the artists and trying to be open to what everybody brings to the table, because everybody wants to do the best possible episode they can. I try and be open to letting people help carry that load. The best thing about being a director is that you get to pick and choose between all the great ideas that everybody has and shape the narrative by getting the benefit of everyone’s expertise and talent.

    JYN // It’s a different mindset. As a Supervising Director, I help. As a Director, I do. On episodes I’m not directing, I am deciphering that director’s ambition and pushing for whatever is required to make that absolutely great. On an episode I’m directing, every choice and image has to go through my brain, so it’s more a reflection of my personal taste. Plus I tend to storyboard a lot more on my own episodes, since it’s a way for me to communicate to the crew. I storyboard a lot on other episodes, but mainly to help figure out problems here and there. It also doesn’t come out of those directors’ budgets, so the free storyboarding is often welcomed.

    How did you choose the episodes you each worked on, and what aspects made them resonate with you?
    TM // In my case with “The Screaming of the Tyrannosaur,” it was really by default. I had written the episode for Zack Snyder, but Zack was too busy, and by that time I’d already fallen in love with the story, so I figured, why not just do it myself? As for “Golgotha,” I always loved the story. It was very efficient and short, which is hard to find in a story – it felt like a full meal. It has a beginning, middle, and end, and it resolves in a satisfying way. “Golgotha” had all of that, plus it was funny.

    JYN // Spider Rose was on the story wall from the beginning. It was one of the “special” ones – very hard, ambitious, uncomfortable. Over the seasons we offered it to different directors, and they veered away from it for one reason or another. But it glowed with a complexity that’s rare in a short story. I think that’s because it was written as an exploration of a far larger world that Bruce Sterling was developing. For me, it was the raw emotionality that drew me in. It’s how I understand how to communicate any story. And I love the way Spider Rose draws you in with emotion, then shivs you with it.
    Were there any episodes in this season that particularly pushed the boundaries of what you had done before? How did that push happen?
    TM // I think “How Zeke Got Religion” pushed the boundaries of 2D animation. The amount of detail and action that the guys at Titmouse were able to pull off was truly astonishing. Once again, Robert Valley outdid himself with 400 Boys. The action scene at the end was one of my favorite pieces of animation in all of Love, Death + Robots.

    JYN // Golgotha, Tim’s episode, is live action, which is a rarity for the show. There was one live-action episode in season 1, but none since. It is primarily an animation series, but nowadays the line is so fuzzy that it seemed to make sense.

    How do you balance creative freedom with the thematic unity required for a show like Love, Death, and Robots?
    TM // There isn’t really a thematic unity. We’re just trying to create the best version of each of the episodes. They don’t tie into each other, they don’t relate to each other, and they aren’t supposed to be about either Love, Death, or Robots – the title is meant to be a “catchall” that could hold ANY story or visual art we thought might be cool. Hopefully, the overall assemblage feels like a balanced meal with a little bit of something for everybody. But thematically speaking, again, I think our theme is that there is no theme.

    JYN // We try to set the foundation with a good story, based on the many short stories Tim has read over the years. Then the HOW of what that story becomes is the woolly Wild West. The directors and studios are fully encouraged to push all the boundaries of how to make these as innovative, impractical, and beautiful as they want. And, since each short is under 15 minutes, the studios we choose can be as experimental and scrappy as each story demands.

    Looking at the overall direction of the season, were there any unexpected moments or surprises that stood out to you during production?
    TM // Yeah, I think the color palette for “Zeke” was a shock to me in a wonderful way, because it was completely unexpected and nothing I would ever do as a director – but boy, did I love it. And I think that “Can’t Stop” was an interesting addition. We had wanted to do a music video from volume 1 onward, and this was the moment we took to do it. I think it’s the greatest concert video ever made.

    JYN // Why do we have so many cats and babies? I’ve no idea. But when we saw the first giant baby shots in 400 Boys, it was a rare joy. They walk like babies, real babies. And somehow that was both accurate and terrifying.

    Looking forward, where do you see the show heading in future seasons? Are there any new themes or concepts you’d love to explore?
    TM // So many directors in the industry have asked if they could play in our sandbox, and I would like to expand our reach to get some established names. Not that we don’t want new talent – we will always want that – but it would be great to have some really fantastic directors who have accomplished big movies come and play with our stories. I also think there’s a version where we bring in some content that may have existed in other mediums like comic books and perhaps tell some larger stories that take more than one episode to tell.
    Truthfully, I’ve already got some really interesting stories picked out for the next few seasons – of course those will change as the show evolves, but they’re fascinating stories that explore the whole arc of history… past, present, and future, and some of the big challenges humanity is facing today. I’d be lying if I didn’t mention that many of them explore what mankind will become with the advent of AI, and how artificial intelligence and humanity’s future intersect.

    JYN // Often themes only show up afterwards. There is a bit of a “herding cats” energy to the show that promises surprises in the production process. But the point of a show like this is that it is surprising. It has its own energy, and sometimes we just have to listen to it rather than dictate.

    If you had the opportunity to create any kind of story for Love, Death + Robots, what would your dream narrative be, and what type of animation style would you envision for it?
    TM // Well, I have to say that I love high-end 3D animation, and that’s what Blur does for a reason. And secondly, I’d like to do the kind of story that could be live action and has some vast scope to it, but that we choose to do in animation because we get more value from the techniques animation brings. We can tell a bigger story, with more scope and more action, than we could using any other methodology… and it competes favorably with live action in terms of the kind of audience that comes to watch it. Not just fans of animation, but fans of good cinema.

    JYN // I’d love to see an anime episode, like a Tsutomu Nihei fight scene, or something by Katsuhiro Otomo.

    A big thanks for your time.
    WANT TO KNOW MORE?
    Blur Studio: Dedicated page about Love, Death + Robots: Volume 4 on the Blur Studio website.
    © Vincent Frei – The Art of VFX – 2025
  • AI Is Deciphering Animal Speech. Should We Try to Talk Back?

    By Isaac Schultz
    Published May 17, 2025

    Scientists are using AI to decipher animal communication, creating some ethical conundrums. © Gizmodo (Illustration: St. Lumbroso; Photos: TatianaKim, Gulf MG/Shutterstock)

    Chirps, trills, growls, howls, squawks. Animals converse in all kinds of ways, yet humankind has only scratched the surface of how they communicate with each other and the rest of the living world. Our species has trained some animals—and if you ask cats, animals have trained us, too—but we’ve yet to truly crack the code on interspecies communication.

    Increasingly, animal researchers are deploying artificial intelligence to accelerate our investigations of animal communication—both within species and between branches on the tree of life. As scientists chip away at the complex communication systems of animals, they move closer to understanding what creatures are saying—and maybe even how to talk back. But as we try to bridge the linguistic gap between humans and animals, some experts are raising valid concerns about whether such capabilities are appropriate—or whether we should even attempt to communicate with animals at all.

    Using AI to untangle animal language

    Towards the front of the pack—or should I say pod?—is Project CETI, which has used machine learning to analyze more than 8,000 sperm whale “codas”—structured click patterns recorded by the Dominica Sperm Whale Project. Researchers uncovered contextual and combinatorial structures in the whales’ clicks, naming features like “rubato” and “ornamentation” to describe how whales subtly adjust their vocalizations during conversation. These patterns helped the team create a kind of phonetic alphabet for the animals—an expressive, structured system that may not be language as we know it but reveals a level of complexity that researchers weren’t previously aware of. Project CETI is also working on ethical guidelines for the technology, a critical goal given the risks of using AI to “talk” to the animals.
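    The article doesn’t include Project CETI’s methods, but the general shape of click-pattern analysis can be sketched. The toy example below, with invented coda timings, groups codas by their normalized inter-click rhythm; it illustrates the idea of rhythm-based coda types, not CETI’s actual pipeline.

    ```python
    # Illustrative sketch only: grouping sperm whale codas by rhythm.
    # The coda timings, features, and cluster count are invented for
    # demonstration; this is not Project CETI's actual pipeline.
    import numpy as np
    from sklearn.cluster import KMeans

    # Each coda is a list of click times in seconds (hypothetical data).
    codas = [
        [0.00, 0.20, 0.40, 0.60, 0.85],   # evenly spaced, slight final stretch
        [0.00, 0.18, 0.36, 0.54, 0.80],
        [0.00, 0.10, 0.20, 0.55, 0.90],   # two fast clicks, then slower ones
        [0.00, 0.12, 0.24, 0.60, 0.95],
    ]

    def rhythm_features(click_times):
        """Normalized inter-click intervals: the coda's rhythm, independent of tempo."""
        icis = np.diff(click_times)
        return icis / icis.sum()

    X = np.array([rhythm_features(c) for c in codas])

    # Group codas with similar rhythm into candidate "coda types".
    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    print(kmeans.labels_)  # e.g., [0 0 1 1]: two rhythm classes in this toy data
    ```

    Normalizing out overall tempo is what lets a scheme like this separate a coda’s rhythm from how fast it is delivered, loosely analogous to the “rubato” feature the researchers describe.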

    Meanwhile, Google and the Wild Dolphin Project recently introduced DolphinGemma, a large language model trained on 40 years of dolphin vocalizations. Just as ChatGPT is an LLM for human inputs—taking in information like research papers and images and producing responses to relevant queries—DolphinGemma intakes dolphin sound data and predicts what vocalization comes next. DolphinGemma can even generate dolphin-like audio, and the researchers’ prototype two-way system, Cetacean Hearing Augmentation Telemetry (CHAT), uses a smartphone-based interface that dolphins employ to request items like scarves or seagrass—potentially laying the groundwork for future interspecies dialogue. “DolphinGemma is being used in the field this season to improve our real-time sound recognition in the CHAT system,” said Denise Herzing, founder and director of the Wild Dolphin Project, which spearheaded the development of DolphinGemma in collaboration with researchers at Google DeepMind, in an email to Gizmodo. “This fall we will spend time ingesting known dolphin vocalizations and let Gemma show us any repeatable patterns they find,” such as vocalizations used in courtship and mother-calf discipline. In this way, Herzing added, the AI applications are two-fold: researchers can use it both to explore dolphins’ natural sounds and to better understand the animals’ responses to human mimicking of dolphin sounds, which are synthetically produced by the AI CHAT system.
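    DolphinGemma’s architecture and tokenization aren’t described here, so as a stand-in, this toy sketch shows the “predict what comes next” framing with a simple bigram counter over made-up sound-unit labels; real systems like DolphinGemma train large transformer models over discretized audio tokens.

    ```python
    # Toy illustration of the "predict the next vocalization" framing.
    # The unit labels are invented; a bigram count model stands in for a
    # transformer trained on discretized dolphin audio.
    from collections import Counter, defaultdict

    # Hypothetical sequence of discretized dolphin sound units.
    sequence = ["whistle_A", "click_burst", "whistle_A", "click_burst",
                "whistle_B", "whistle_A", "click_burst", "whistle_B"]

    # Count which unit tends to follow which.
    follows = defaultdict(Counter)
    for prev, nxt in zip(sequence, sequence[1:]):
        follows[prev][nxt] += 1

    def predict_next(unit):
        """Most frequent successor of `unit` in the training sequence."""
        return follows[unit].most_common(1)[0][0] if follows[unit] else None

    print(predict_next("whistle_A"))  # -> "click_burst" in this toy data
    ```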

    Expanding the animal AI toolkit

    Outside the ocean, researchers are finding that human speech models can be repurposed to decode terrestrial animal signals, too. A University of Michigan-led team used Wav2Vec2—a speech recognition model trained on human voices—to identify dogs’ emotions, genders, breeds, and even individual identities based on their barks. The pre-trained human model outperformed a version trained solely on dog data, suggesting that human language model architectures could be surprisingly effective in decoding animal communication.

    Of course, we need to consider the different levels of sophistication these AI models are targeting. Determining whether a dog’s bark is aggressive or playful, or whether the dog is male or female—these are perhaps understandably easier for a model to determine than, say, the nuanced meaning encoded in sperm whale phonetics. Nevertheless, each study inches scientists closer to understanding how AI tools, as they currently exist, can best be applied to such an expansive field—and gives the AI a chance to train itself to become a more useful part of the researcher’s toolkit.
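    The article doesn’t detail the Michigan team’s training setup, so here is a minimal sketch of the general technique: reusing a human-speech Wav2Vec2 checkpoint for bark classification via Hugging Face’s transformers library. The checkpoint name is a real public model, but the label set and audio are placeholders, not the study’s data.

    ```python
    # Sketch of repurposing a human-speech Wav2Vec2 backbone for bark
    # classification, in the spirit of the Michigan study. The label set,
    # data, and training details are assumptions, not the paper's setup.
    import torch
    from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

    labels = ["aggressive", "playful"]  # hypothetical bark categories
    extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
    model = Wav2Vec2ForSequenceClassification.from_pretrained(
        "facebook/wav2vec2-base",   # backbone pre-trained on human speech
        num_labels=len(labels),     # fresh classification head for barks
    )

    # One second of random 16 kHz audio stands in for a real bark recording.
    waveform = torch.randn(16000).numpy()
    inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")

    # Fine-tuning would minimize this loss over many labeled barks.
    out = model(**inputs, labels=torch.tensor([1]))
    print(out.loss.item(), labels[out.logits.argmax(-1).item()])
    ```

    The key design point the study highlights is transfer: the acoustic representations the backbone learned from human speech carry over to dog vocalizations, so only the small classification head needs to learn the bark-specific mapping.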

    And even cats—often seen as aloof—appear to be more communicative than they let on. In a 2022 study out of Paris Nanterre University, cats showed clear signs of recognizing their owner’s voice, and beyond that, the felines responded more intensely when spoken to directly in “cat talk.” That suggests cats not only pay attention to what we say, but also to how we say it—especially when it comes from someone they know.

    Earlier this month, a pair of cuttlefish researchers found evidence that the animals have a set of four “waves,” or physical gestures, that they make to one another, as well as in response to human playback of cuttlefish waves. The group plans to apply an algorithm to categorize the types of waves, automatically track the creatures’ movements, and more rapidly understand the contexts in which the animals express themselves.
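    The researchers’ categorization algorithm isn’t specified in the article; one plausible minimal approach is to cluster simple motion features extracted from tracked gestures. Everything in this sketch, from the feature choices to the numbers, is invented for illustration.

    ```python
    # Hypothetical sketch: sorting tracked cuttlefish arm movements into wave
    # types by clustering simple motion features. The features and data are
    # made up; the study's actual algorithm isn't described in the article.
    import numpy as np
    from sklearn.cluster import KMeans

    # Per-gesture features: [mean arm curvature, duration (s),
    # dominant oscillation frequency (Hz)] — all example values.
    gestures = np.array([
        [0.8, 1.2, 2.0],
        [0.7, 1.1, 2.1],
        [0.2, 0.5, 4.0],
        [0.3, 0.6, 3.8],
        [0.9, 2.0, 1.0],
        [0.5, 0.4, 5.1],
    ])

    # Four clusters, matching the four wave types the researchers report.
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(gestures)
    print(km.labels_)  # cluster index assigned to each observed gesture
    ```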

    Private companies are also getting in on the act. Last week, China’s largest search engine, Baidu, filed a patent with the country’s IP administration proposing to translate animal vocalizations into human language. The quick and dirty on the tech is that it would intake a trove of data from your kitty, then use an AI model to analyze the data, determine the animal’s emotional state, and output the apparent human-language message your pet was trying to convey.
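    The patent describes a flow rather than an implementation, so the sketch below mirrors its three described stages with hypothetical stubs; none of these functions correspond to any real Baidu API.

    ```python
    # Hypothetical pipeline mirroring the patent's described stages; every
    # function here is a stub — the filing describes a flow, not code.
    def collect_signals(pet):
        """Stage 1: gather vocal and behavioral data from the animal."""
        return {"audio": pet.get("meows", []), "posture": pet.get("posture")}

    def infer_emotion(signals):
        """Stage 2: an AI model would map the signals to an emotional state."""
        return "hungry" if len(signals["audio"]) > 3 else "content"

    def render_message(emotion):
        """Stage 3: convert the inferred state into a human-language message."""
        return {"hungry": "Feed me, please.", "content": "All good here."}[emotion]

    cat = {"meows": ["m1", "m2", "m3", "m4"], "posture": "pacing"}
    print(render_message(infer_emotion(collect_signals(cat))))  # "Feed me, please."
    ```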
    A universal translator for animals?

    Together, these studies represent a major shift in how scientists are approaching animal communication. Rather than starting from scratch, research teams are building on tools and models designed for humans—and making advances that would have taken much longer otherwise. The end goal could be a kind of Rosetta Stone for the animal kingdom, powered by AI.
    “We’ve gotten really good at analyzing human language just in the last five years, and we’re beginning to perfect this practice of transferring models trained on one dataset and applying them to new data,” said Sara Keen, a behavioral ecologist and electrical engineer at the Earth Species Project, in a video call with Gizmodo.

    The Earth Species Project plans to launch its flagship audio-language model for animal sounds, NatureLM, this year, and a demo for NatureLM-audio is already live. With input data from across the tree of life—as well as human speech, environmental sounds, and even music detection—the model aims to become a converter of human speech into animal analogues. The model “shows promising domain transfer from human speech to animal communication,” the project states, “supporting our hypothesis that shared representations in AI can help decode animal languages.”

    “A big part of our work really is trying to change the way people think about our place in the world,” Keen added. “We’re making cool discoveries about animal communication, but ultimately we’re finding that other species are just as complicated and nuanced as we are. And that revelation is pretty exciting.”

    The ethical dilemma

    Indeed, researchers generally agree on the promise of AI-based tools for improving the collection and interpretation of animal communication data. But some feel there’s a breakdown in communication between that scholarly familiarity and the public’s perception of how these tools can be applied.

    “I think there’s currently a lot of misunderstanding in the coverage of this topic—that somehow machine learning can create this contextual knowledge out of nothing. That so long as you have thousands of hours of audio recordings, somehow some magic machine learning black box can squeeze meaning out of that,” said Christian Rutz, an expert in animal behavior and cognition and founding president of the International Bio-Logging Society, in a video call with Gizmodo. “That’s not going to happen.”

    “Meaning comes through the contextual annotation, and this is where I think it’s really important for this field as a whole, in this period of excitement and enthusiasm, to not forget that this annotation comes from basic behavioral ecology and natural history expertise,” Rutz added. In other words, let’s not put the horse before the cart, especially since the cart—in this case—is what’s powering the horse.

    But with great power… you know the cliché. Essentially, how can humans develop and apply these technologies in a way that is both scientifically illuminating and minimizes harm or disruption to their animal subjects? Experts have put forward ethical standards and guardrails for using these technologies that prioritize the welfare of creatures as we get closer to—well, wherever the technology is going.

    As AI advances, conversations about animal rights will have to evolve. In the future, animals could become more active participants in those conversations—a notion that legal experts are exploring as a thought exercise, but one that could someday become reality. “What we desperately need—apart from advancing the machine learning side—is to forge these meaningful collaborations between the machine learning experts and the animal behavior researchers,” Rutz said, “because it’s only when you put the two of us together that you stand a chance.”

    There’s no shortage of communication data to feed into data-hungry AI models, from pitch-perfect prairie dog squeaks to snails’ slimy trails. But exactly how we make use of the information we glean from these new approaches requires thorough consideration of the ethics involved in “speaking” with animals. A recent paper on the ethical concerns of using AI to communicate with whales outlined six major problem areas. These include privacy rights, cultural and emotional harm to whales, anthropomorphism, technological solutionism, gender bias, and limited effectiveness for actual whale conservation. That last issue is especially urgent, given how many whale populations are already under serious threat.

    It increasingly appears that we’re on the brink of learning much more about the ways animals interact with one another—indeed, pulling back the curtain on their communication could also yield insights into how they learn, socialize, and act within their environments. But there are still significant challenges to overcome, such as asking ourselves how we use the powerful technologies currently in development.

    #deciphering #animal #speech #should #try
    AI Is Deciphering Animal Speech. Should We Try to Talk Back?
    By Isaac Schultz Published May 17, 2025 | Comments| Scientists are using AI to decipher animal communication, creating some ethical conundrums. © Gizmodo [Illustration: St. Lumbroso, Photos: TatianaKim,Gulf MG/Shutterstock) Chirps, trills, growls, howls, squawks. Animals converse in all kinds of ways, yet humankind has only scratched the surface of how they communicate with each other and the rest of the living world. Our species has trained some animals—and if you ask cats, animals have trained us, too—but we’ve yet to truly crack the code on interspecies communication. Increasingly, animal researchers are deploying artificial intelligence to accelerate our investigations of animal communication—both within species and between branches on the tree of life. As scientists chip away at the complex communication systems of animals, they move closer to understanding what creatures are saying—and maybe even how to talk back. But as we try to bridge the linguistic gap between humans and animals, some experts are raising valid concerns about whether such capabilities are appropriate—or whether we should even attempt to communicate with animals at all. Using AI to untangle animal language Towards the front of the pack—or should I say pod?—is Project CETI, which has used machine learning to analyze more than 8,000 sperm whale “codas”—structured click patterns recorded by the Dominica Sperm Whale Project. Researchers uncovered contextual and combinatorial structures in the whales’ clicks, naming features like “rubato” and “ornamentation” to describe how whales subtly adjust their vocalizations during conversation. These patterns helped the team create a kind of phonetic alphabet for the animals—an expressive, structured system that may not be language as we know it but reveals a level of complexity that researchers weren’t previously aware of. Project CETI is also working on ethical guidelines for the technology, a critical goal given the risks of using AI to “talk” to the animals. Meanwhile, Google and the Wild Dolphin Project recently introduced DolphinGemma, a large language modeltrained on 40 years of dolphin vocalizations. Just as ChatGPT is an LLM for human inputs—taking visual information like research papers and images and producing responses to relevant queries—DolphinGemma intakes dolphin sound data and predicts what vocalization comes next. DolphinGemma can even generate dolphin-like audio, and the researchers’ prototype two-way system, Cetacean Hearing Augmentation Telemetry, uses a smartphone-based interface that dolphins employ to request items like scarves or seagrass—potentially laying the groundwork for future interspecies dialogue. “DolphinGemma is being used in the field this season to improve our real-time sound recognition in the CHAT system,” said Denise Herzing, founder and director of the Wild Dolphin Project, which spearheaded the development of DolphinGemma in collaboration with researchers at Google DeepMind, in an email to Gizmodo. “This fall we will spend time ingesting known dolphin vocalizations and let Gemma show us any repeatable patterns they find,” such as vocalizations used in courtship and mother-calf discipline. In this way, Herzing added, the AI applications are two-fold: Researchers can use it both to explore dolphins’ natural sounds and to better understand the animals’ responses to human mimicking of dolphin sounds, which are synthetically produced by the AI CHAT system. 
    Expanding the animal AI toolkit

    Outside the ocean, researchers are finding that human speech models can be repurposed to decode terrestrial animal signals, too. A University of Michigan-led team used Wav2Vec2—a speech recognition model trained on human voices—to identify dogs’ emotions, genders, breeds, and even individual identities based on their barks. The pre-trained human model outperformed a version trained solely on dog data, suggesting that human language model architectures could be surprisingly effective in decoding animal communication (a minimal sketch of this transfer idea appears at the end of this section).
    Of course, we need to consider the different levels of sophistication these AI models are targeting. Determining whether a dog’s bark is aggressive or playful, or whether it’s male or female—these are perhaps understandably easier for a model to determine than, say, the nuanced meaning encoded in sperm whale phonetics. Nevertheless, each study inches scientists closer to understanding how AI tools, as they currently exist, can best be applied to such an expansive field—and gives the AI a chance to train itself to become a more useful part of the researcher’s toolkit.
    And even cats—often seen as aloof—appear to be more communicative than they let on. In a 2022 study out of Paris Nanterre University, cats showed clear signs of recognizing their owner’s voice, and beyond that, the felines responded more intensely when spoken to directly in “cat talk.” That suggests cats pay attention not only to what we say but also to how we say it—especially when it comes from someone they know.
    Earlier this month, a pair of cuttlefish researchers found evidence that the animals have a set of four “waves,” or physical gestures, that they make to one another, as well as to human playback of cuttlefish waves. The group plans to apply an algorithm to categorize the types of waves, automatically track the creatures’ movements, and more rapidly understand the contexts in which the animals express themselves.
    Private companies (such as Google) are also getting in on the act. Last week, China’s largest search engine, Baidu, filed a patent with the country’s IP administration proposing to translate animal (specifically cat) vocalizations into human language. The quick and dirty on the tech is that it would intake a trove of data from your kitty, then use an AI model to analyze the data, determine the animal’s emotional state, and output the apparent human-language message your pet was trying to convey.

    A universal translator for animals?

    Together, these studies represent a major shift in how scientists are approaching animal communication. Rather than starting from scratch, research teams are building on tools and models designed for humans—and making advances that would have taken much longer otherwise. The end goal could (read: could) be a kind of Rosetta Stone for the animal kingdom, powered by AI.
    “We’ve gotten really good at analyzing human language just in the last five years, and we’re beginning to perfect this practice of transferring models trained on one dataset and applying them to new data,” said Sara Keen, a behavioral ecologist and electrical engineer at the Earth Species Project, in a video call with Gizmodo.
    The Earth Species Project plans to launch its flagship audio-language model for animal sounds, NatureLM, this year, and a demo for NatureLM-audio is already live. With input data from across the tree of life—as well as human speech, environmental sounds, and even music detection—the model aims to become a converter of human speech into animal analogues. The model “shows promising domain transfer from human speech to animal communication,” the project states, “supporting our hypothesis that shared representations in AI can help decode animal languages.”
    “A big part of our work really is trying to change the way people think about our place in the world,” Keen added. “We’re making cool discoveries about animal communication, but ultimately we’re finding that other species are just as complicated and nuanced as we are. And that revelation is pretty exciting.”
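    As promised above, here is a minimal sketch of the transfer idea behind the Michigan dog study: start from a speech model pre-trained on human voices and attach a small classification head for animal sounds. This is not the team's actual code; the checkpoint, the four bark labels, and the random stand-in waveform are all assumptions, and a real experiment would fine-tune on labeled recordings before trusting any output.

```python
# Sketch: repurpose a human-speech model (Wav2Vec2) to classify barks.
# Hypothetical labels; the classification head starts untrained here.
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForSequenceClassification

LABELS = ["aggressive", "playful", "fearful", "neutral"]  # assumed bark classes

extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
model = Wav2Vec2ForSequenceClassification.from_pretrained(
    "facebook/wav2vec2-base", num_labels=len(LABELS)
)

# Stand-in for one second of bark audio sampled at 16 kHz; real work
# would load labeled recordings and fine-tune the head before inference.
waveform = torch.randn(16000)
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(LABELS[logits.argmax(dim=-1).item()])
```

    The point of the design is that the expensive part, learning general acoustic representations, was already done on human speech; only the small head on top needs animal data.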
    The ethical dilemma

    Indeed, researchers generally agree on the promise of AI-based tools for improving the collection and interpretation of animal communication data. But some feel that there’s a breakdown in communication between that scholarly familiarity and the public’s perception of how these tools can be applied.
    “I think there’s currently a lot of misunderstanding in the coverage of this topic—that somehow machine learning can create this contextual knowledge out of nothing. That so long as you have thousands of hours of audio recordings, somehow some magic machine learning black box can squeeze meaning out of that,” said Christian Rutz, an expert in animal behavior and cognition and founding president of the International Bio-Logging Society, in a video call with Gizmodo. “That’s not going to happen.”
    “Meaning comes through the contextual annotation, and this is where I think it’s really important for this field as a whole, in this period of excitement and enthusiasm, to not forget that this annotation comes from basic behavioral ecology and natural history expertise,” Rutz added. In other words, let’s not put the cart before the horse, especially since the cart—in this case—is what’s powering the horse.
    But with great power… you know the cliché. Essentially, how can humans develop and apply these technologies in a way that is both scientifically illuminating and minimizes harm or disruption to their animal subjects? Experts have put forward ethical standards and guardrails for using the technologies that prioritize the welfare of creatures as we get closer to—well, wherever the technology is going.
    As AI advances, conversations about animal rights will have to evolve. In the future, animals could become more active participants in those conversations—a notion that legal experts are exploring as a thought exercise, but one that could someday become reality.
    “What we desperately need—apart from advancing the machine learning side—is to forge these meaningful collaborations between the machine learning experts and the animal behavior researchers,” Rutz said, “because it’s only when you put the two of us together that you stand a chance.”
    There’s no shortage of communication data to feed into data-hungry AI models, from pitch-perfect prairie dog squeaks to snails’ slimy trails (yes, really). But exactly how we make use of the information we glean from these new approaches requires thorough consideration of the ethics involved in “speaking” with animals.
    A recent paper on the ethical concerns of using AI to communicate with whales outlined six major problem areas. These include privacy rights, cultural and emotional harm to whales, anthropomorphism, technological solutionism (an overreliance on technology to fix problems), gender bias, and limited effectiveness for actual whale conservation. That last issue is especially urgent, given how many whale populations are already under serious threat.
    It increasingly appears that we’re on the brink of learning much more about the ways animals interact with one another—indeed, pulling back the curtain on their communication could also yield insights into how they learn, socialize, and act within their environments. But there are still significant challenges to overcome, such as asking ourselves how we use the powerful technologies currently in development.
  • These Ancient Scrolls Have Been a Tantalizing Mystery for 2,000 Years. Researchers Just Deciphered a Title for the First Time

    Cool Finds

    Mount Vesuvius’ eruption preserved the Herculaneum scrolls beneath a blanket of ash. Two millennia later, X-ray scans show that one of them is a philosophical text called “On Vices”

    The scroll previously known only as PHerc. 172 was written by the Epicurean philosopher Philodemus.
    Vesuvius Challenge / Bodleian Libraries, Oxford University

    In the 1750s, an Italian farmer digging a well stumbled upon a lavish villa in the ruins of Herculaneum. Inside was a sprawling library with hundreds of scrolls, untouched since Mount Vesuvius’ eruption in 79 C.E. Some of them were still neatly tucked away on the shelves.
    This staggering discovery was the only complete library from antiquity ever found. But when 18th-century scholars tried to unroll the charred papyrus, the scrolls crumbled to pieces. They became resigned to the fact that the text hidden inside wouldn’t be revealed during their lifetimes.
    In recent years, however, researchers realized that they were living in the generation that would finally solve the puzzle. Using artificial intelligence, they’ve developed methods to peer inside the Herculaneum scrolls without damaging them, revealing short passages of ancient text.
    This month, researchers announced a new breakthrough. While analyzing a scroll known as PHerc. 172, they determined its title: On Vices. Based on other works, they think the full title is On Vices and Their Opposite Virtues and in Whom They Are and About What.

    The scan revealed letters spelling out the scroll's title.

    Vesuvius Challenge

    “We are thrilled to share that the written title of this scroll has been recovered from deep inside its carbonized folds of papyrus,” the Vesuvius Challenge, which is leading efforts to decipher the scrolls, says in a statement. “This is the first time the title of a still-rolled Herculaneum scroll has ever been recovered noninvasively.”
    On Vices was written by Philodemus, a Greek philosopher who lived in Herculaneum more than a century before Vesuvius’ eruption. Born around 110 B.C.E., Philodemus studied at a school in Athens founded several centuries earlier by the influential philosopher Epicurus, who believed in achieving happiness by pursuing certain forms of pleasure.
    “This will be a great opportunity to learn more about Philodemus’ ethical views and to get a better view of the On Vices as a whole,” Michael McOsker, a papyrologist at University College London who is working with the Vesuvius Challenge, tells CNN’s Catherine Nicholls.
    When it launched in 2023, the Vesuvius Challenge offered more than $1 million in prize money to citizen scientists around the world who could use A.I. to help decipher scans of the Herculaneum scrolls.

    Spearheaded by Brent Seales, a computer scientist at the University of Kentucky, the team scanned several of the scrolls and uploaded the data for anyone to use. To earn the prize money, participants competed to be the first to reach a series of milestones.
    Reading the papyrus involves solving several difficult problems. After the rolled-up scrolls are scanned, their many layers need to be separated out and flattened into two-dimensional segments. At that point, the carbon-based ink usually isn’t visible in the scans, so machine-learning models are necessary to identify the inked sections.
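    To make the ink-detection step concrete, here is a schematic sketch of the kind of patch classifier such pipelines can use: small stacks of CT slices around the papyrus surface go in, and an ink probability comes out. Winning Vesuvius Challenge models are far larger and more sophisticated; the architecture, patch shape, and slice depth below are assumptions for illustration.

```python
# Schematic of the ink-detection step: classify small patches of the
# flattened X-ray volume as "ink" or "no ink". Shapes and architecture
# are assumptions, not any contestant's actual model.
import torch
import torch.nn as nn

class InkPatchClassifier(nn.Module):
    def __init__(self, depth=16):
        super().__init__()
        # Each patch is a stack of `depth` CT slices around the papyrus
        # surface; the slices are treated as input channels.
        self.net = nn.Sequential(
            nn.Conv2d(depth, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(64, 1),  # logit: does this patch carry ink?
        )

    def forward(self, patch):
        return self.net(patch)

model = InkPatchClassifier()
# Stand-in batch: 8 patches, each 16 slices of 64x64 pixels.
patches = torch.randn(8, 16, 64, 64)
ink_probability = torch.sigmoid(model(patches))
print(ink_probability.shape)  # torch.Size([8, 1])
```

    Trained on patches where ink is known to be present or absent, a classifier of this general shape can then sweep across a whole flattened segment to paint a legibility map of hidden letters.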
    In late 2023, a computer science student revealed the first word on an unopened scroll: “porphyras,” an ancient Greek term for “purple.” Months later, participants worked out 2,000 characters of text, which discussed pleasures such as music and food.

    (Embedded video: “5 Surprising Facts About Pompeii”)

    But PHerc. 172 is different from these earlier scrolls. When researchers scanned it last summer, they realized that some of the ink was visible in the images. They aren’t sure why this scroll is so much more legible, though they hypothesize it’s because the ink contains a denser contaminant such as lead, according to the University of Oxford’s Bodleian Libraries, which houses the scroll.
    In early May, the Vesuvius Challenge announced that contestants Marcel Roth and Micha Nowak, computer scientists at Germany’s University of Würzburg, would receive $60,000 for deciphering the title. Sean Johnson, a researcher with the Vesuvius Challenge, had independently identified the title around the same time.
    Researchers are anticipating many more breakthroughs on the horizon. In the past three months alone, they’ve already scanned dozens of new scrolls.
    “The pace is ramping up very quickly,” McOsker tells the Guardian’s Ian Sample. “All of the technological progress that’s been made on this has been in the last three to five years—and on the timescales of classicists, that’s unbelievable.”
