• Into the Omniverse: World Foundation Models Advance Autonomous Vehicle Simulation and Safety

    Editor’s note: This blog is a part of Into the Omniverse, a series focused on how developers, 3D practitioners and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.
    Simulated driving environments enable engineers to safely and efficiently train, test and validate autonomous vehicles (AVs) across countless real-world and edge-case scenarios without the risks and costs of physical testing.
    These simulated environments can be created through neural reconstruction of real-world data from AV fleets or generated with world foundation models (WFMs) — neural networks that understand physics and real-world properties. WFMs can be used to generate synthetic datasets for enhanced AV simulation.
    To help physical AI developers build such simulated environments, NVIDIA unveiled major advances in WFMs at the GTC Paris and CVPR conferences earlier this month. These new capabilities enhance NVIDIA Cosmos — a platform of generative WFMs, advanced tokenizers, guardrails and accelerated data processing tools.
    Key innovations like Cosmos Predict-2, the Cosmos Transfer-1 NVIDIA preview NIM microservice and Cosmos Reason are improving how AV developers generate synthetic data, build realistic simulated environments and validate safety systems at unprecedented scale.
    Universal Scene Description (OpenUSD), a unified data framework and standard for physical AI applications, enables seamless integration and interoperability of simulation assets across the development pipeline. OpenUSD standardization plays a critical role in ensuring 3D pipelines are built to scale.
    NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services for building OpenUSD-based physical AI applications, enables simulations from WFMs and neural reconstruction at world scale.
    Leading AV organizations — including Foretellix, Mcity, Oxa, Parallel Domain, Plus AI and Uber — are among the first to adopt Cosmos models.

    Foundations for Scalable, Realistic Simulation
    Cosmos Predict-2, NVIDIA’s latest WFM, generates high-quality synthetic data by predicting future world states from multimodal inputs like text, images and video. This capability is critical for creating temporally consistent, realistic scenarios that accelerate training and validation of AVs and robots.

    In addition, Cosmos Transfer, a control model that adds variations in weather, lighting and terrain to existing scenarios, will soon be available to 150,000 developers on CARLA, a leading open-source AV simulator. This greatly expands the broad AV developer community’s access to advanced AI-powered simulation tools.
    Developers can start integrating synthetic data into their own pipelines using the NVIDIA Physical AI Dataset. The latest release includes 40,000 clips generated using Cosmos.
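    For developers who want to script the download rather than browse it manually, the sketch below pulls a subset of clips with the huggingface_hub client. It assumes the release is fetched from a Hugging Face dataset repository; the repo ID and file patterns are placeholders to substitute with the actual dataset listing.

```python
# Minimal sketch: fetching a slice of an openly hosted synthetic-data release
# with huggingface_hub. The repo ID below is a placeholder (assumption) --
# substitute the actual NVIDIA Physical AI Dataset repository you intend to use.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/physical-ai-dataset",   # placeholder repo ID, not the real name
    repo_type="dataset",
    allow_patterns=["*.mp4", "*.json"],      # fetch clips and their metadata only
    local_dir="./physical_ai_clips",
)
print(f"Downloaded dataset files to {local_dir}")
```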
    Building on these foundations, the Omniverse Blueprint for AV simulation provides a standardized, API-driven workflow for constructing rich digital twins, replaying real-world sensor data and generating new ground-truth data for closed-loop testing.
    The blueprint taps into OpenUSD’s layer-stacking and composition arcs, which enable developers to collaborate asynchronously and modify scenes nondestructively. This helps create modular, reusable scenario variants to efficiently generate different weather conditions, traffic patterns and edge cases.
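    As a rough illustration of how that nondestructive workflow looks in practice, the sketch below uses the OpenUSD Python bindings (pxr) to compose a scenario layer over a shared base scene and switch weather through a variant set. File, prim and attribute names are illustrative, not part of the Omniverse Blueprint itself.

```python
# Minimal sketch of nondestructive scenario authoring with OpenUSD layer
# stacking and a variant set. Assumes the pxr bindings are installed
# (e.g. `pip install usd-core`); all names here are illustrative.
from pxr import Usd, UsdGeom, Sdf

# Base scene: the shared road environment that no one edits destructively.
base = Usd.Stage.CreateNew("road_base.usda")
UsdGeom.Xform.Define(base, "/World")
UsdGeom.Mesh.Define(base, "/World/Road")
base.GetRootLayer().Save()

# Scenario layer: composed over the base via the layer stack, so weather and
# traffic edits live in their own file and never touch road_base.usda.
scenario = Usd.Stage.CreateNew("scenario_rain.usda")
scenario.GetRootLayer().subLayerPaths.append("road_base.usda")

# A variant set captures interchangeable weather conditions on one prim.
world = scenario.OverridePrim("/World")
weather = world.GetVariantSets().AddVariantSet("weather")
for condition in ["clear", "rain", "fog"]:
    weather.AddVariant(condition)

weather.SetVariantSelection("rain")
with weather.GetVariantEditContext():
    # Opinions authored here apply only when the "rain" variant is selected.
    world.CreateAttribute("weather:rainIntensity", Sdf.ValueTypeNames.Float).Set(0.8)

scenario.GetRootLayer().Save()
```

    Switching the variant selection to “fog” or “clear” swaps the scenario without duplicating the base scene, which is the reusable-variant pattern the blueprint relies on.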
    Driving the Future of AV Safety
    To bolster the operational safety of AV systems, NVIDIA earlier this year introduced NVIDIA Halos — a comprehensive safety platform that integrates the company’s full automotive hardware and software stack with AI research focused on AV safety.
    The new Cosmos models — Cosmos Predict-2, Cosmos Transfer-1 NIM and Cosmos Reason — deliver further safety enhancements to the Halos platform, enabling developers to create diverse, controllable and realistic scenarios for training and validating AV systems.
    These models, trained on massive multimodal datasets including driving data, amplify the breadth and depth of simulation, allowing for robust scenario coverage — including rare and safety-critical events — while supporting post-training customization for specialized AV tasks.

    At CVPR, NVIDIA was recognized as an Autonomous Grand Challenge winner, highlighting its leadership in advancing end-to-end AV workflows. The challenge used OpenUSD’s robust metadata and interoperability to simulate sensor inputs and vehicle trajectories in semi-reactive environments, achieving state-of-the-art results in safety and compliance.
    Learn more about how developers are leveraging tools like CARLA, Cosmos, and Omniverse to advance AV simulation in this livestream replay:

    Hear NVIDIA Director of Autonomous Vehicle Research Marco Pavone on the NVIDIA AI Podcast share how digital twins and high-fidelity simulation are improving vehicle testing, accelerating development and reducing real-world risks.
    Get Plugged Into the World of OpenUSD
    Learn more about what’s next for AV simulation with OpenUSD by watching the replay of NVIDIA founder and CEO Jensen Huang’s GTC Paris keynote.
    Looking for more live opportunities to learn more about OpenUSD? Don’t miss sessions and labs happening at SIGGRAPH 2025, August 10–14.
    Discover why developers and 3D practitioners are using OpenUSD and learn how to optimize 3D workflows with the self-paced “Learn OpenUSD” curriculum for 3D developers and practitioners, available for free through the NVIDIA Deep Learning Institute.
    Explore the Alliance for OpenUSD forum and the AOUSD website.
    Stay up to date by subscribing to NVIDIA Omniverse news, joining the community and following NVIDIA Omniverse on Instagram, LinkedIn, Medium and X.
  • Q&A: How anacondas, chickens, and locals may be able to coexist in the Amazon

    A coiled giant anaconda. They are the largest snake species in Brazil and play a major role in legends including the ‘Boiuna’ and the ‘Cobra Grande.’ CREDIT: Beatriz Cosendey.


    South America’s lush Amazon region is a biodiversity hotspot, which means that every living thing must find a way to co-exist, including some of the most feared snakes on the planet: anacondas. In a paper published June 16 in the journal Frontiers in Amphibian and Reptile Science, conservation biologists Beatriz Cosendey and Juarez Carlos Brito Pezzuti from the Federal University of Pará’s Center for Amazonian Studies in Brazil analyze the key points behind the interactions between humans and local anaconda populations.
    Ahead of the paper’s publication, the team at Frontiers conducted this wide-ranging Q&A with Cosendey. It has not been altered.
    Frontiers: What inspired you to become a researcher?
    Beatriz Cosendey: As a child, I was fascinated by reports and documentaries about field research and often wondered what it took to be there and what kind of knowledge was being produced. Later, as an ecologist, I felt the need for approaches that better connected scientific research with real-world contexts. I became especially interested in perspectives that viewed humans not as separate from nature, but as part of ecological systems. This led me to explore integrative methods that incorporate local and traditional knowledge, aiming to make research more relevant and accessible to the communities involved.
    F: Can you tell us about the research you’re currently working on?
    BC: My research focuses on ethnobiology, an interdisciplinary field intersecting ecology, conservation, and traditional knowledge. We investigate not only the biodiversity of an area but also the relationship local communities have with surrounding species, providing a better understanding of local dynamics and areas needing special attention for conservation. After all, no one knows a place better than those who have lived there for generations. This deep familiarity allows for early detection of changes or environmental shifts. Additionally, developing a collaborative project with residents generates greater engagement, as they recognize themselves as active contributors; and collective participation is essential for effective conservation.
    A local boating on the Amazon River. CREDIT: Beatriz Cosendey.
    F: Could you tell us about one of the legends surrounding anacondas?
    BC: One of the greatest myths is about the Great Snake—a huge snake that is said to inhabit the Amazon River and sleep beneath the town. According to the dwellers, the Great Snake is an anaconda that has grown too large; its movements can shake the river’s waters, and its eyes look like fire in the darkness of night. People say anacondas can grow so big that they can swallow large animals—including humans or cattle—without difficulty.
    F: What could be the reasons why the traditional role of anacondas as a spiritual and mythological entity has changed? Do you think the fact that fewer anacondas have been seen in recent years contributes to their diminished importance as a mythological entity?
    BC: Not exactly. I believe the two are related, but not in a direct way. The mythology still exists, but among Aritapera dwellers, there’s a more practical, everyday concern—mainly the fear of losing their chickens. As a result, anacondas have come to be seen as stealthy thieves. These traits are mostly associated with smaller individuals (up to around 2–2.5 meters), while the larger ones—which may still carry the symbolic weight of the ‘Great Snake’—tend to retreat to more sheltered areas; because of the presence of houses, motorized boats, and general noise, they are now seen much less frequently.
    A giant anaconda is being measured. Credit: Pedro Calazans.
    F: Can you share some of the quotes you’ve collected in interviews that show the attitude of community members towards anacondas? How do chickens come into play?
    BC: When talking about anacondas, one thing always comes up: chickens. “Chicken is her [the anaconda’s] favorite dish. If one clucks, she comes,” said one dweller. This kind of remark helps explain why the conflict is often framed in economic terms. During the interviews and conversations with local dwellers, many emphasized the financial impact of losing their animals: “The biggest loss is that they keep taking chicks and chickens…” or “You raise the chicken—you can’t just let it be eaten for free, right?”
    For them, it’s a loss of investment, especially since corn, which is used as chicken feed, is expensive. As one person put it: “We spend time feeding and raising the birds, and then the snake comes and takes them.” One dweller shared that, in an attempt to prevent another loss, he killed the anaconda and removed the last chicken it had swallowed from its belly—”it was still fresh,” he said—and used it for his meal, cooking the chicken for lunch so it wouldn’t go to waste.
    One of the Amazonas communities where the researchers conducted their research. CREDIT: Beatriz Cosendey.
    Some interviewees reported that they had to rebuild their chicken coops and pigsties because too many anacondas were getting in. Participants would point out where the anaconda had entered and explained that they came in through gaps or cracks but couldn’t get out afterwards because they ‘tufavam’ — a local term referring to the snake’s body swelling after ingesting prey.
    We saw chicken coops made with mesh, with nylon, some that worked and some that didn’t. Guided by the locals’ insights, we concluded that the best solution to compensate for the gaps between the wooden slats is to line the coop with a fine nylon mesh (to block smaller animals), and on the outside, a layer of wire mesh, which protects the inner mesh and prevents the entry of larger animals.
    F: Are there any common misconceptions about this area of research? How would you address them?
    BC: Yes, very much. Although ethnobiology is an old science, it’s still underexplored and often misunderstood. In some fields, there are ongoing debates about the robustness and scientific validity of the field and related areas. This is largely because the findings don’t always rely only on hard statistical data.
    However, like any other scientific field, it follows standardized methodologies, and no result is accepted without proper grounding. What happens is that ethnobiology leans more toward the human sciences, placing human beings and traditional knowledge as key variables within its framework.
    To address these misconceptions, I believe it’s important to emphasize that ethnobiology produces solid and relevant knowledge—especially in the context of conservation and sustainable development. It offers insights that purely biological approaches might overlook and helps build bridges between science and society.
    The study focused on the várzea regions of the Lower Amazon River. CREDIT: Beatriz Cosendey.
    F: What are some of the areas of research you’d like to see tackled in the years ahead?
    BC: I’d like to see more conservation projects that include local communities as active participants rather than as passive observers. Incorporating their voices, perspectives, and needs not only makes initiatives more effective, but also more just. There is also great potential in recognizing and valuing traditional knowledge. Beyond its cultural significance, certain practices—such as the use of natural compounds—could become practical assets for other vulnerable regions. Once properly documented and understood, many of these approaches offer adaptable forms of environmental management and could help inform broader conservation strategies elsewhere.
    F: How has open science benefited the reach and impact of your research?
    BC: Open science is crucial for making research more accessible. By eliminating access barriers, it facilitates a broader exchange of knowledge—especially important for interdisciplinary research like mine, which draws on multiple knowledge systems and gains value when shared widely. For scientific work, it ensures that knowledge reaches a wider audience, including practitioners and policymakers. This openness fosters dialogue across different sectors, making research more inclusive and encouraging greater collaboration among diverse groups.
    The Q&A can also be read here.
  • Anker’s Soundcore Sleep earbuds finally feature active noise canceling

    Anker has announced a new version of its wireless sleep buds that could be even more effective at delivering a peaceful slumber by blocking out disturbing noises using active noise cancellation. Previous versions of the Soundcore Sleep earbuds blocked external sounds passively using just a snug fit inside the ear, but the new Sleep A30 finally add ANC while still offering enough battery life to last the night.

    As with previous versions, Anker is making its new Soundcore Sleep A30 available for preorder through a Kickstarter crowdfunding campaign that’s launching today, while full availability of the earbuds is expected sometime in August 2025 through Amazon and Soundcore’s online store. At $229.99, the Sleep A30 are quite a bit more expensive than last year’s $149.99 Sleep A20, but the earliest Kickstarter backers can get the A30 discounted to $139.

    The Sleep A30 are slimmer and smaller than previous versions, potentially making them more comfortable to wear overnight. Image: Anker

    The Sleep A30 earbuds are now 7 percent slimmer and feature a smaller design that ensures they don’t protrude from your ears, so there’s reduced pressure while wearing them and lying on a pillow if you’re a side sleeper. To help you find a snug fit, Anker includes four sizes of silicone ear tips, three sizes of memory foam tips, and three sizes of ear wings.

    Anker claims the new Sleep A30 block up to 30dB of external noise, but the added ANC, which uses two mics positioned inside and outside your ears, does result in reduced battery life. The A20 could run for up to 14 hours on a single charge, but the A30 max out at up to nine hours on their own, or up to 45 hours with their charging case. However, that’s only when listening to white noise or other sounds designed to help you fall asleep that are stored on the buds themselves. When streaming music or podcasts from a phone, battery life is further reduced to up to 6.5 hours or 35 hours with the case.

    The Sleep A30’s charging case has been upgraded to detect snoring sounds and generate audio to mask them. Image: Anker

    The Sleep A30’s charging case has been upgraded with what Anker is calling “Adaptive Snore Masking technology.” If it detects the sounds of snoring from another person nearby, it analyzes the volume and frequency of the sounds and generates “noise masking audio” that’s sent to the buds to help block it out.

    The new earbuds also feature sleep monitoring and sleep position tracking, allowing you to see how restful or eventful your night was through the Soundcore mobile app; a private repeatable alarm with snooze functionality; and a Find My Earbud feature should they fall out in the night and get lost in the sheets.
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
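    To make the memory argument concrete, here is a deliberately simplified quantization sketch in NumPy: weights are stored at low precision with a per-tensor scale and dequantized on use. It illustrates the general idea only; it is an int8-style toy, not the FP8/FP4 formats or the calibration recipe NVIDIA and Stability AI applied to SD3.5.

```python
# Conceptual sketch of post-training quantization: store weights at lower
# precision with a per-tensor scale, then dequantize at use time. This is a
# simplified int8-style illustration of the idea, not the FP8/FP4 formats or
# the actual recipe used for Stable Diffusion 3.5.
import numpy as np

def quantize(weights: np.ndarray, num_bits: int = 8):
    qmax = 2 ** (num_bits - 1) - 1                  # symmetric signed range
    scale = np.abs(weights).max() / qmax            # per-tensor scale factor
    q = np.clip(np.round(weights / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4096, 4096).astype(np.float32)   # a stand-in weight matrix
q, scale = quantize(w)

print(f"fp32 size: {w.nbytes / 1e6:.1f} MB, int8 size: {q.nbytes / 1e6:.1f} MB")
print(f"max abs error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```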
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion 3.5 Large, to FP8 — reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time, on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Stable Diffusion 3.5 quantized to FP8 generates images in half the time with similar quality as FP16. Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPUs can run the model from memory instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
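    The typical developer-facing version of that step is building an engine from an ONNX export with the tensorrt Python API, as sketched below. The model path is a placeholder, and the FP16 flag stands in for the lower-precision build; an FP8 build additionally requires a recent TensorRT release and a quantized checkpoint, which this sketch does not cover.

```python
# Minimal sketch: building a TensorRT engine from an ONNX export with the
# tensorrt Python API. "model.onnx" is a placeholder path; FP16 is used here
# for simplicity rather than FP8.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
# Explicit-batch networks are required by the ONNX parser; newer TensorRT
# releases default to explicit batch and deprecate this flag.
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)   # request lower-precision Tensor Core kernels

engine_bytes = builder.build_serialized_network(network, config)
with open("model.plan", "wb") as f:
    f.write(engine_bytes)               # serialized engine, ready to deserialize at runtime
```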
    FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature.
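    The general build-once, cache-on-device pattern described above can be sketched with standard tensorrt runtime calls: compile on first use, persist the serialized plan, and deserialize it on later runs. This illustrates the pattern, not the internals of TensorRT for RTX’s JIT compiler; build_engine is assumed to be a function like the ONNX build shown earlier.

```python
# Sketch of the build-once, cache-on-device pattern: compile an engine the
# first time the feature runs, save the serialized plan, and deserialize it
# on later launches. Uses standard tensorrt runtime calls; build_engine() is
# an assumed callable returning serialized engine bytes.
import os
import tensorrt as trt

PLAN_PATH = "model.plan"          # on-device cache location (illustrative)
logger = trt.Logger(trt.Logger.WARNING)

def load_or_build_engine(build_engine) -> trt.ICudaEngine:
    runtime = trt.Runtime(logger)
    if os.path.exists(PLAN_PATH):
        # Later runs: skip compilation and load the cached plan.
        with open(PLAN_PATH, "rb") as f:
            return runtime.deserialize_cuda_engine(f.read())
    # First run: build (e.g. during installation or first feature use) and cache.
    engine_bytes = build_engine()
    with open(PLAN_PATH, "wb") as f:
        f.write(engine_bytes)
    return runtime.deserialize_cuda_engine(engine_bytes)
```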
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
    Plug in to NVIDIA AI PC on Facebook, Instagram, TikTok and X — and stay informed by subscribing to the RTX AI PC newsletter.
    Follow NVIDIA Workstation on LinkedIn and X. 
    See notice regarding software product information.
  • The 3 most important KPIs running an on-device acquisition campaign

    On-device channels are no longer all about preloads. Today, telcos represent another performance marketing channel with transparent reporting and deeper insights. To get the full picture behind the performance of your on-device campaigns, it’s critical to prioritize long-term KPIs. It’s the only way the stickiness of users acquired through these channels really shines. Why? On-device campaigns reach users when they’re setting up their new devices and looking to download apps they’ll use throughout the device lifetime, not necessarily right away. Think about it - if you download a booking app from an ad during device setup, are you planning to book a vacation immediately or later down the road?

    This means attribution is a waiting game for on-device campaigns, with day 30 as the turning point. In fact, if a user engages with your app 30 days down the line, they’re more likely to stay active for a long period of time. Simply put, LTV is high for on-device campaigns, so you want KPIs that let you measure and optimize the value of the users you attract far down the road.

    ROAS

    ROAS is king when it comes to measuring the long-term value of your users. To get the clearest idea of your ROAS and how to optimize it, there are a few things to keep in mind. First, ROAS should be measured on D30/60/90, not D1/3/7. This is because, with on-device channels, users are likely to open an app within the first 30 days or longer - when a user downloads an app during device setup, they do so expecting to open it in the future, not right away.

    You should also pay attention to how it’s being measured. ROAS is calculated by dividing the revenue a campaign generates by the amount it costs to run it. In the context of on-device campaigns, that revenue comes from in-app purchases, subscriptions or ad monetization. When measuring the effectiveness of your on-device campaigns, it’s important to calculate ROAS using your on-device ad revenue rather than average ad revenue, which will be lower. That’s because ad revenue is high for users acquired through on-device campaigns - on-device channels use unique data points and deep algorithms to ensure the right bid for each individual user. To get the clearest picture of where you stand in relation to your ROAS goals, you should integrate ad revenue with your on-device platform.

    Once calculated, ROAS gives a clear monetary view of your campaigns, so it’s clear how much you spent versus how much you brought in. This matters because it tells you whether your on-device campaigns are reaching valuable users, and looking at ROAS by placement shows which placements are doing it best. With the knowledge of how to maximize ROAS, you’ll maximize the long-term value and engagement of your users, too.

    Cost KPIs

    Comparing LTV to spend will help you determine whether your users are spending enough to cover your costs and ultimately turn a profit. You can even pinpoint areas of your strategy that are effective, and those that may need adjustment. There are a few ways to measure cost effectiveness; here are the two most common, especially for on-device campaigns.

    Cost per action (CPA)

    If it’s quality you’re looking for, first run a CPA campaign to confirm that you’re looking in the right places for users who will engage with your app. To count as a conversion, users must see the ad, install the app and complete the action you preset. You’ll only pay for the users who reach a chosen point in the app experience after installation. A CPA that is higher than LTV is a clear indicator that your campaigns are focused on less relevant channels or touchpoints, while a CPA that is lower than your LTV confirms that you are attracting high-quality users. In the context of on-device campaigns, this is key because it means you won’t pay immediately for a user who may not engage for a month or so. The pricing model also integrates in-app revenue, which is useful for apps that rely more on IAPs than ads.

    Cost per retained user (CPRU)

    It’s also worthwhile to keep track of how much you’re paying for the user who’s still there on day 30. CPRU takes into account conversions and retention rate: if your budget is $10k, you have 1,000 conversions and a day 1 retention rate of 20%, you come away with 200 converted users at a $50 per-user acquisition cost. If you can increase retention, you end up with higher-quality users at a lower CPRU. When you measure CPRU, retention becomes a success metric for your UA campaigns and can help you determine whether you have enough engaged users to cover spend.

    On day 30 and beyond, these KPIs can help you optimize your on-device campaigns to reach the most engaged users with high LTV.
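    To make the arithmetic concrete, here is a minimal sketch of how these KPIs could be computed from campaign totals. It reuses the worked example above ($10k spend, 1,000 conversions, 20% day 1 retention); the revenue figure in the ROAS line is an invented illustration, not a benchmark.

```python
# Sketch: computing ROAS, CPA and CPRU from campaign totals.
# Figures mirror the worked example in the text; the revenue number is invented.

def roas(revenue: float, spend: float) -> float:
    """Return on ad spend: revenue generated per dollar spent."""
    return revenue / spend

def cpa(spend: float, completed_actions: int) -> float:
    """Cost per action: spend divided by users who completed the preset action."""
    return spend / completed_actions

def cpru(spend: float, conversions: int, retention_rate: float) -> float:
    """Cost per retained user: spend divided by users still active at the checkpoint."""
    return spend / (conversions * retention_rate)

spend = 10_000          # campaign budget in dollars
conversions = 1_000     # installs that counted as conversions
d1_retention = 0.20     # 20% of converted users still active on day 1

print(cpru(spend, conversions, d1_retention))  # 50.0 -> $50 per retained user
print(cpa(spend, conversions))                 # 10.0 -> $10 per completed action
print(roas(revenue=12_500, spend=spend))       # 1.25 -> illustrative D90 ROAS of 125%
```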
  • Autodesk adds AI animation tool MotionMaker to Maya 2026.1


    A still from a demo shot created using MotionMaker, the new generative AI toolset introduced in Maya 2026.1 for roughing out movement animations.

    Autodesk has released Maya 2026.1, the latest version of its 3D modeling and animation software for visual effects, games and motion graphics work. The release adds MotionMaker, a new AI-based system for generating movement animations for biped and quadruped characters, especially for previs and layout work.
    Other changes include a new modular character rigging framework inside Bifrost for Maya, plus updates to liquid simulation, OpenPBR support and USD workflows.
    Autodesk has also released Maya Creative 2026.1, the corresponding update to the cut-down edition of Maya for smaller studios.

    MotionMaker: new generative AI tool roughs out movement animations

    The headline feature in Maya 2026.1 is MotionMaker: a new generative animation system. It lets users “create natural character movements in minutes instead of hours”, using a workflow more “like giving stage directions to a digital actor” than traditional animation.
    Users set keys for a character’s start and end positions, or create a guide path in the viewport, and MotionMaker automatically generates the motion in between.
    At the minute, that mainly means locomotion cycles, for both bipeds and quadrupeds, plus a few other movements, like jumping or sitting.
    Although MotionMaker is designed for “anyone in the animation pipeline”, the main initial use cases seem to be layout and previs rather than hero animation.
    Its output is also intended to be refined manually – Autodesk’s promotional material describes it as getting users “80% of the way there” for “certain types of shots”.
    Accordingly, MotionMaker comes with its own Editor window, which provides access to standard Maya animation editing tools.
    Users can layer in animation from other sources, including motion capture or keyframe animation retargeted from other characters: to add upper body movements, for example.
    There are a few more MotionMaker-specific controls: Autodesk’s demo video shows speed ramping, to control the time it takes the character to travel between two points.
    There is also a Character Scale setting, which determines how a character’s size and weight are expressed in the generated animation.
    You can read more about the design and aims of MotionMaker in a Q&A with Autodesk Senior Principal Research Scientist Evan Atherton on Autodesk’s blog.
    According to Atherton, the AI models were trained using motion capture data “specifically collected for this tool”.
    That includes source data from male and female human performers, plus wolf-style dogs, although the system is “designed to support additional [motion] styles” in future.

    Bifrost: new modular character rigging framework

    Character artists and animators also get a new modular rigging framework in Bifrost. Autodesk has been teasing new character rigging capabilities in the node-based framework for building effects since Maya 2025.1, but this seems to be its official launch.
    The release is compatibility-breaking, and does not work with earlier versions of the toolset.
    The new Rigging Module Framework is described as a “modular, compound-based system for building … production-ready rigs”, and is “fully integrated with Maya”.
    Animators can “interact with module inputs and outputs directly from the Maya scene”, and rigs created with Bifrost can be converted into native Maya controls, joints and attributes.

    Bifrost: improvements to liquid simulation and workflow
    Bifrost 2.14 for Maya also features improvements to Bifrost’s existing functionality, particularly liquid simulation.
    The properties of collider objects, like bounciness, stickiness and roughness, can now influence liquid behavior in the same way they do particle behavior and other collisions.
    In addition, a new parameter controls air drag on foam and spray thrown out by a liquid.
    Workflow improvements include the option to convert Bifrost curves to Maya scene curves, and batch execution, to write out cache files “without the risk of accidentally overwriting them”.

    LookdevX: support for OpenPBR in FBX files
    LookdevX, Maya’s plugin for creating USD shading graphs, has also been updated.
    Autodesk introduced support for OpenPBR, the open material standard intended as a unified successor to the Autodesk Standard Surface and Adobe Standard Material, in 2024.
    To that, the latest update adds support for OpenPBR materials in FBX files, making it possible to import or export them from other applications that support OpenPBR: at the minute, 3ds Max plus some third-party renderers.
    LookdevX 1.8 also features a number of workflow improvements, particularly on macOS.
    USD for Maya: workflow improvements

    USD for Maya, the software’s USD plugin, also gets workflow improvements, with USD for Maya 0.32 adding support for animation curves for camera attributes in exports. Other changes include support for MaterialX documents and better representation of USD lights in the viewport.
    Arnold for Maya: performance improvements

    Maya’s integration plugin for Autodesk’s Arnold renderer has also been updated, with MtoA 5.5.2 supporting the changes in Arnold 7.4.2. They’re primarily performance improvements, especially to scene initialization times when rendering on machines with high numbers of CPU cores.
    Maya Creative 2026.1 also released

    Autodesk has also released Maya Creative 2026.1, the corresponding update to the cut-down edition of Maya aimed at smaller studios, and available on a pay-as-you-go basis. It includes most of the new features from Maya 2026.1, including MotionMaker, but does not include Bifrost for Maya.
    Price and system requirements

    Maya 2026.1 is available for Windows 10+, RHEL and Rocky Linux 8.10/9.3/9.5, and macOS 13.0+. The software is rental-only. Subscriptions cost $255/month or $2,010/year, up a further $10/month or $65/year since the release of Maya 2026.
    In many countries, artists earning under $100,000/year and working on projects valued at under $100,000/year qualify for Maya Indie subscriptions, now priced at $330/year.
    Maya Creative is available pay-as-you-go, with prices starting at $3/day and a minimum spend of $300/year.
    Read a full list of new features in Maya 2026.1 in the online documentation

    Have your say on this story by following CG Channel on Facebook, Instagram and X. As well as being able to comment on stories, followers of our social media accounts can see videos we don’t post on the site itself, including making-ofs for the latest VFX movies, animations, games cinematics and motion graphics projects.