• The AI execution gap: Why 80% of projects don’t reach production

    Enterprise artificial intelligence investment is unprecedented, with IDC projecting global spending on AI and GenAI to double to $631 billion by 2028. Yet beneath the impressive budget allocations and boardroom enthusiasm lies a troubling reality: most organisations struggle to translate their AI ambitions into operational success.

    The sobering statistics behind AI’s promise

    ModelOp’s 2025 AI Governance Benchmark Report, based on input from 100 senior AI and data leaders at Fortune 500 enterprises, reveals a disconnect between aspiration and execution. While more than 80% of enterprises have 51 or more generative AI projects in proposal phases, only 18% have successfully deployed more than 20 models into production.

    The execution gap represents one of the most significant challenges facing enterprise AI today. Most generative AI projects still require 6 to 18 months to go live – if they reach production at all. The result is delayed returns on investment, frustrated stakeholders, and diminished confidence in AI initiatives across the enterprise.

    The cause: Structural, not technical barriers

    The biggest obstacles preventing AI scalability aren’t technical limitations – they’re structural inefficiencies plaguing enterprise operations. The ModelOp benchmark report identifies several problems that create what experts call a “time-to-market quagmire.”

    Fragmented systems plague implementation. 58% of organisations cite fragmented systems as the top obstacle to adopting governance platforms. Fragmentation creates silos where different departments use incompatible tools and processes, making it nearly impossible to maintain consistent oversight of AI initiatives.

    Manual processes dominate despite digital transformation. 55% of enterprises still rely on manual processes – including spreadsheets and email – to manage AI use case intake. This reliance on antiquated methods creates bottlenecks, increases the likelihood of errors, and makes it difficult to scale AI operations.

    Lack of standardisation hampers progress. Only 23% of organisations implement standardised intake, development, and model management processes. Without these elements, each AI project becomes a unique challenge requiring custom solutions and extensive coordination across multiple teams.

    Enterprise-level oversight remains rare. Just 14% of companies perform AI assurance at the enterprise level, increasing the risk of duplicated efforts and inconsistent oversight. The lack of centralised governance means organisations often discover they’re solving the same problems multiple times in different departments.

    The governance revolution: From obstacle to accelerator

    A change is taking place in how enterprises view AI governance. Rather than seeing it as a compliance burden that slows innovation, forward-thinking organisations recognise governance as an important enabler of scale and speed.

    Leadership alignment signals a strategic shift. The ModelOp benchmark data reveals a change in organisational structure: 46% of companies now assign accountability for AI governance to a Chief Innovation Officer – more than four times the number who place accountability under Legal or Compliance. This repositioning reflects a new understanding that governance isn’t solely about risk management but can enable innovation.

    Investment follows strategic priority. The financial commitment to AI governance underscores its importance. According to the report, 36% of enterprises have budgeted at least $1 million annually for AI governance software, while 54% have allocated resources specifically for AI Portfolio Intelligence to track value and ROI.

    What high-performing organisations do differently

    The enterprises that successfully bridge the execution gap share several characteristics in their approach to AI implementation:

    Standardised processes from day one. Leading organisations implement standardised intake, development, and model review processes across AI initiatives. Consistency eliminates the need to reinvent workflows for each project and ensures that all stakeholders understand their responsibilities.

    Centralised documentation and inventory. Rather than allowing AI assets to proliferate in disconnected systems, successful enterprises maintain centralised inventories that provide visibility into every model’s status, performance, and compliance posture.

    Automated governance checkpoints. High-performing organisations embed automated governance checkpoints throughout the AI lifecycle, helping ensure compliance requirements and risk assessments are addressed systematically rather than as afterthoughts.

    End-to-end traceability. Leading enterprises maintain complete traceability of their AI models, including data sources, training methods, validation results, and performance metrics.

    Measurable impact of structured governance

    The benefits of implementing comprehensive AI governance extend beyond compliance. Organisations that adopt lifecycle automation platforms reportedly see dramatic improvements in operational efficiency and business outcomes.

    A financial services firm profiled in the ModelOp report halved its time to production and cut issue resolution time by 80% after implementing automated governance processes. Such improvements translate directly into faster time-to-value and increased confidence among business stakeholders.

    Enterprises with robust governance frameworks report the ability to manage many times more models simultaneously while maintaining oversight and control. This scalability lets organisations pursue AI initiatives across multiple business units without overwhelming their operational capabilities.

    The path forward: From stuck to scaled

    The message from industry leaders is that the gap between AI ambition and execution is solvable, but closing it requires a shift in approach. Rather than treating governance as a necessary evil, enterprises should recognise that it enables AI innovation at scale.

    Immediate action items for AI leaders

    Organisations looking to escape the time-to-market quagmire should prioritise the following:

    Audit the current state: Conduct an assessment of existing AI initiatives, identifying fragmented processes and manual bottlenecks.
    Standardise workflows: Implement consistent processes for AI use case intake, development, and deployment across all business units.
    Invest in integration: Deploy platforms that unify disparate tools and systems under a single governance framework.
    Establish enterprise oversight: Create centralised visibility into all AI initiatives, with real-time monitoring and reporting capabilities.

    The competitive advantage of getting it right

    Organisations that solve the execution challenge will be able to bring AI solutions to market faster, scale more efficiently, and maintain the trust of stakeholders and regulators. Enterprises that continue with fragmented processes and manual workflows will find themselves at a disadvantage compared with their more organised competitors. Operational excellence isn’t merely about efficiency; it’s about survival.

    The data shows enterprise AI investment will continue to grow. The question isn’t whether organisations will invest in AI, but whether they’ll develop the operational capabilities necessary to realise a return on that investment. The opportunity to lead in the AI-driven economy has never been greater for those willing to embrace governance as an enabler, not an obstacle.
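
    To make the inventory and checkpoint practices described above more concrete, here is a minimal, purely illustrative Python sketch of what one entry in a centralised model inventory with lifecycle checkpoints might look like. It is not ModelOp’s product or any specific vendor’s API; every name, field, and stage below is a hypothetical example.

```python
# Purely illustrative sketch: a hypothetical model-inventory record with
# governance checkpoints. Not ModelOp's product or any vendor's API;
# all names, fields, and stages are invented for the example.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional


class Stage(Enum):
    PROPOSED = "proposed"
    IN_DEVELOPMENT = "in_development"
    VALIDATED = "validated"
    IN_PRODUCTION = "in_production"
    RETIRED = "retired"


@dataclass
class Checkpoint:
    """A single governance gate that must pass before a model advances."""
    name: str                         # e.g. "compliance sign-off" (hypothetical)
    passed: bool = False
    reviewed_on: Optional[date] = None


@dataclass
class ModelRecord:
    """One entry in a centralised model inventory."""
    model_id: str
    owner: str
    business_unit: str
    stage: Stage = Stage.PROPOSED
    data_sources: list = field(default_factory=list)   # end-to-end traceability
    checkpoints: list = field(default_factory=list)

    def ready_to_advance(self) -> bool:
        """Automated gate: advance the lifecycle only if every checkpoint passed."""
        return bool(self.checkpoints) and all(cp.passed for cp in self.checkpoints)


# Usage: register a model and ask whether it may move to the next stage.
record = ModelRecord(
    model_id="demand-forecast-001",            # hypothetical identifier
    owner="analytics-team",
    business_unit="retail",
    data_sources=["sales_history_2022_2024"],  # hypothetical data source
    checkpoints=[Checkpoint("model validation"), Checkpoint("compliance sign-off")],
)
print(record.ready_to_advance())  # False until both checkpoints are marked passed
```

    In a real governance platform the same record would also carry performance metrics and links to validation artefacts, so that status, performance, and compliance posture are visible from a single place, as the report’s high performers do.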
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
  • Q&A: How anacondas, chickens, and locals may be able to coexist in the Amazon

    A coiled giant anaconda. Anacondas are the largest snake species in Brazil and play a major role in legends, including the ‘Boiuna’ and the ‘Cobra Grande.’ CREDIT: Beatriz Cosendey.

    South America’s lush Amazon region is a biodiversity hotspot, which means that every living thing must find a way to coexist – even some of the most feared snakes on the planet: anacondas. In a paper published June 16 in the journal Frontiers in Amphibian and Reptile Science, conservation biologists Beatriz Cosendey and Juarez Carlos Brito Pezzuti from the Federal University of Pará’s Center for Amazonian Studies in Brazil analyze the key points behind the interactions between humans and local anaconda populations.
    Ahead of the paper’s publication, the team at Frontiers conducted this wide-ranging Q&A with Cosendey. It has not been altered.
    Frontiers: What inspired you to become a researcher?
    Beatriz Cosendey: As a child, I was fascinated by reports and documentaries about field research and often wondered what it took to be there and what kind of knowledge was being produced. Later, as an ecologist, I felt the need for approaches that better connected scientific research with real-world contexts. I became especially interested in perspectives that viewed humans not as separate from nature, but as part of ecological systems. This led me to explore integrative methods that incorporate local and traditional knowledge, aiming to make research more relevant and accessible to the communities involved.
    F: Can you tell us about the research you’re currently working on?
    BC: My research focuses on ethnobiology, an interdisciplinary field intersecting ecology, conservation, and traditional knowledge. We investigate not only the biodiversity of an area but also the relationship local communities have with surrounding species, providing a better understanding of local dynamics and areas needing special attention for conservation. After all, no one knows a place better than those who have lived there for generations. This deep familiarity allows for early detection of changes or environmental shifts. Additionally, developing a collaborative project with residents generates greater engagement, as they recognize themselves as active contributors; and collective participation is essential for effective conservation.
    A local boating on the Amazon River. CREDIT: Beatriz Cosendey.
    F: Could you tell us about one of the legends surrounding anacondas?
    BC: One of the greatest myths is about the Great Snake—a huge snake that is said to inhabit the Amazon River and sleep beneath the town. According to the dwellers, the Great Snake is an anaconda that has grown too large; its movements can shake the river’s waters, and its eyes look like fire in the darkness of night. People say anacondas can grow so big that they can swallow large animals—including humans or cattle—without difficulty.
    F: What could be the reasons why the traditional role of anacondas as a spiritual and mythological entity has changed? Do you think the fact that fewer anacondas have been seen in recent years contributes to their diminished importance as a mythological entity?
    BC: Not exactly. I believe the two are related, but not in a direct way. The mythology still exists, but among Aritapera dwellers, there’s a more practical, everyday concern—mainly the fear of losing their chickens. As a result, anacondas have come to be seen as stealthy thieves. These traits are mostly associated with smaller individuals (up to around 2–2.5 meters), while the larger ones—which may still carry the symbolic weight of the ‘Great Snake’—tend to retreat to more sheltered areas; because of the presence of houses, motorized boats, and general noise, they are now seen much less frequently.
    A giant anaconda is being measured. Credit: Pedro Calazans.
    F: Can you share some of the quotes you’ve collected in interviews that show the attitude of community members towards anacondas? How do chickens come into play?
    BC: When talking about anacondas, one thing always comes up: chickens. “Chicken is her [the anaconda’s] favorite dish. If one clucks, she comes,” said one dweller. This kind of remark helps explain why the conflict is often framed in economic terms. During the interviews and conversations with local dwellers, many emphasized the financial impact of losing their animals: “The biggest loss is that they keep taking chicks and chickens…” or “You raise the chicken—you can’t just let it be eaten for free, right?”
    For them, it’s a loss of investment, especially since corn, which is used as chicken feed, is expensive. As one person put it: “We spend time feeding and raising the birds, and then the snake comes and takes them.” One dweller shared that, in an attempt to prevent another loss, he killed the anaconda and removed the last chicken it had swallowed from its belly—”it was still fresh,” he said—and used it for his meal, cooking the chicken for lunch so it wouldn’t go to waste.
    One of the Amazonas communities where the researchers conducted their research. CREDIT: Beatriz Cosendey.
    Some interviewees reported that they had to rebuild their chicken coops and pigsties because too many anacondas were getting in. Participants would point out where the anaconda had entered and explained that they came in through gaps or cracks but couldn’t get out afterwards because they ‘tufavam’ — a local term referring to the snake’s body swelling after ingesting prey.
    We saw chicken coops made with mesh, with nylon, some that worked and some that didn’t. Guided by the locals’ insights, we concluded that the best solution to compensate for the gaps between the wooden slats is to line the coop with a fine nylon mesh (to block smaller animals), and on the outside, a layer of wire mesh, which protects the inner mesh and prevents the entry of larger animals.
    F: Are there any common misconceptions about this area of research? How would you address them?
    BC: Yes, very much. Although ethnobiology is an old science, it’s still underexplored and often misunderstood. In some fields, there are ongoing debates about the robustness and scientific validity of the field and related areas. This is largely because the findings don’t always rely only on hard statistical data.
    However, like any other scientific field, it follows standardized methodologies, and no result is accepted without proper grounding. What happens is that ethnobiology leans more toward the human sciences, placing human beings and traditional knowledge as key variables within its framework.
    To address these misconceptions, I believe it’s important to emphasize that ethnobiology produces solid and relevant knowledge—especially in the context of conservation and sustainable development. It offers insights that purely biological approaches might overlook and helps build bridges between science and society.
    The study focused on the várzea regions of the Lower Amazon River. CREDIT: Beatriz Cosendey.
    F: What are some of the areas of research you’d like to see tackled in the years ahead?
    BC: I’d like to see more conservation projects that include local communities as active participants rather than as passive observers. Incorporating their voices, perspectives, and needs not only makes initiatives more effective, but also more just. There is also great potential in recognizing and valuing traditional knowledge. Beyond its cultural significance, certain practices—such as the use of natural compounds—could become practical assets for other vulnerable regions. Once properly documented and understood, many of these approaches offer adaptable forms of environmental management and could help inform broader conservation strategies elsewhere.
    F: How has open science benefited the reach and impact of your research?
    BC: Open science is crucial for making research more accessible. By eliminating access barriers, it facilitates a broader exchange of knowledge—especially important for interdisciplinary research like mine, which draws on multiple knowledge systems and gains value when shared widely. For scientific work, it ensures that knowledge reaches a wider audience, including practitioners and policymakers. This openness fosters dialogue across different sectors, making research more inclusive and encouraging greater collaboration among diverse groups.
    The Q&A can also be read here.
    WWW.POPSCI.COM
  • Four science-based rules that will make your conversations flow

    One of the four pillars of good conversation is levity. You needn’t be a comedian, but you can have some fun. Tetra Images, LLC/Alamy
    Conversation lies at the heart of our relationships – yet many of us find it surprisingly hard to talk to others. We may feel anxious at the thought of making small talk with strangers and struggle to connect with the people who are closest to us. If that sounds familiar, Alison Wood Brooks hopes to help. She is a professor at Harvard Business School, where she teaches an oversubscribed course called “TALK: How to talk gooder in business and life”, and the author of a new book, Talk: The science of conversation and the art of being ourselves. Both offer four key principles for more meaningful exchanges. Conversations are inherently unpredictable, says Wood Brooks, but they follow certain rules – and knowing their architecture makes us more comfortable with what is outside of our control. New Scientist asked her about the best ways to apply this research to our own chats.
    David Robson: Talking about talking feels quite meta. Do you ever find yourself critiquing your own performance?
    Alison Wood Brooks: There are so many levels of “meta-ness”. I have often felt like I’m floating over the room, watching conversations unfold, even as I’m involved in them myself. I teach a course at Harvard, and my students all get to experience this feeling as well. There can be an uncomfortable period of hypervigilance, but I hope that dissipates over time as they develop better habits. There is a famous quote from Charlie Parker, who was a jazz saxophonist. He said something like, “Practise, practise, practise, and then when you get on stage, let it all go and just wail.” I think that’s my approach to conversation. Even when you’re hyper-aware of conversation dynamics, you have to remember the true delight of being with another human mind, and never lose the magic of being together. Think ahead, but once you’re talking, let it all go and just wail.

    Reading your book, I learned that a good way to enliven a conversation is to ask someone why they are passionate about what they do. So, where does your passion for conversation come from?
    I have two answers to this question. One is professional. Early in my professorship at Harvard, I had been studying emotions by exploring how people talk about their feelings and the balance between what we feel inside and how we express that to others. And I realised I just had this deep, profound interest in figuring out how people talk to each other about everything, not just their feelings. We now have scientific tools that allow us to capture conversations and analyse them at large scale. Natural language processing, machine learning, the advent of AI – all this allows us to take huge swathes of transcript data and process it much more efficiently.

    The personal answer is that I’m an identical twin, and I spent my whole life, from the moment I opened my newborn eyes, existing next to a person who’s an exact copy of myself. It was like observing myself at very close range, interacting with the world, interacting with other people. I could see when she said and did things well, and I could try to do that myself. And I saw when her jokes failed, or she stumbled over her words – I tried to avoid those mistakes. It was a very fortunate form of feedback that not a lot of people get. And then, as a twin, you’ve got this person sharing a bedroom, sharing all your clothes, going to all the same parties and playing on the same sports teams, so we were just constantly in conversation with each other. You reached this level of shared reality that is so incredible, and I’ve spent the rest of my life trying to help other people get there in their relationships, too.
    “TALK” cleverly captures your framework for better conversations: topics, asking, levity and kindness. Let’s start at the beginning. How should we decide what to talk about?
    My first piece of advice is to prepare. Some people do this naturally. They already think about the things that they should talk about with somebody before they see them. They should lean into this habit. Some of my students, however, think it’s crazy. They think preparation will make the conversation seem rigid and forced and overly scripted. But just because you’ve thought ahead about what you might talk about doesn’t mean you have to talk about those things once the conversation is underway. It does mean, however, that you always have an idea waiting for you when you’re not sure what to talk about next. Having just one topic in your back pocket can help you in those anxiety-ridden moments. It makes things more fluent, which is important for establishing a connection. Choosing a topic is not only important at the start of a conversation. We’re constantly making decisions about whether we should stay on one subject, drift to something else or totally shift gears and go somewhere wildly different.
    Sometimes the topic of conversation is obvious. Even then, knowing when to switch to a new one can be tricky. Martin Parr/Magnum Photos
    What’s your advice when making these decisions?
    There are three very clear signs that suggest that it’s time to switch topics. The first is longer mutual pauses. The second is more uncomfortable laughter, which we use to fill the space that we would usually fill excitedly with good content. And the third sign is redundancy. Once you start repeating things that have already been said on the topic, it’s a sign that you should move to something else.
    After an average conversation, most people feel like they’ve covered the right number of topics. But if you ask people after conversations that didn’t go well, they’ll more often say that they didn’t talk about enough things, rather than that they talked about too many things. This suggests that a common mistake is lingering too long on a topic after you’ve squeezed all the juice out of it.
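    As a purely illustrative aside (not part of Wood Brooks’s research), the three signals above can be written down as a toy heuristic. All thresholds and field names in the sketch below are invented for the example:

```python
# Toy heuristic for the three topic-switch signals described above:
# longer mutual pauses, uncomfortable (filler) laughter, and redundancy.
# Thresholds and field names are invented; for illustration only.
from dataclasses import dataclass


@dataclass
class Turn:
    speaker: str
    text: str
    pause_before_s: float    # silence before this turn, in seconds
    awkward_laughter: bool   # laughter that fills space rather than reacts to content


def should_switch_topic(turns: list) -> bool:
    """Suggest a switch when pauses lengthen, laughter turns filler-like,
    or recent turns repeat points already made on the topic."""
    if len(turns) < 4:
        return False
    recent, earlier = turns[-3:], turns[:-3]
    long_pauses = sum(t.pause_before_s for t in recent) / 3 > 2.0      # signal 1: longer mutual pauses
    filler_laughs = sum(t.awkward_laughter for t in recent) >= 2       # signal 2: uncomfortable laughter
    earlier_texts = {t.text.lower() for t in earlier}
    redundancy = any(t.text.lower() in earlier_texts for t in recent)  # signal 3: redundancy
    return long_pauses or filler_laughs or redundancy
```

    In a live conversation these signals are read intuitively; the sketch simply makes the decision rule explicit.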
    The second element of TALK is asking questions. I think a lot of us have heard the advice to ask more questions, yet many people don’t apply it. Why do you think that is?
    Many years of research have shown that the human mind is remarkably egocentric. Often, we are so focused on our own perspective that we forget to even ask someone else to share what’s in their mind. Another reason is fear. You’re interested in the other person, and you know you should ask them questions, but you’re afraid of being too intrusive, or that you will reveal your own incompetence, because you feel you should know the answer already.

    What kinds of questions should we be asking – and avoiding?
    In the book, I talk about the power of follow-up questions that build on anything that your partner has just said. It shows that you heard them, that you care and that you want to know more. Even one follow-up question can springboard us away from shallow talk into something deeper and more meaningful.
    There are, however, some bad patterns of question asking, such as “boomerasking”. Michael Yeomans and I have a recent paper about this, and oh my gosh, it’s been such fun to study. It’s a play on the word boomerang: it comes back to the person who threw it. If I ask you what you had for breakfast, and you tell me you had Special K and banana, and then I say, “Well, let me tell you about my breakfast, because, boy, was it delicious” – that’s boomerasking. Sometimes it’s a thinly veiled way of bragging or complaining, but sometimes I think people are genuinely interested to hear from their partner, but then the partner’s answer reminds them so much of their own life that they can’t help but start sharing their perspective. In our research, we have found that this makes your partner feel like you weren’t interested in their perspective, so it seems very insincere. Sharing your own perspective is important. It’s okay at some point to bring the conversation back to yourself. But don’t do it so soon that it makes your partner feel like you didn’t hear their answer or care about it.
    Research by Alison Wood Brooks includes a recent study on “boomerasking”, a pitfall you should avoid to make conversations flow. Janelle Bruno
    What are the benefits of levity?
    When we think of conversations that haven’t gone well, we often think of moments of hostility, anger or disagreement, but a quiet killer of conversation is boredom. Levity is the antidote. These small moments of sparkle or fizz can pull us back in and make us feel engaged with each other again.
    Our research has shown that we give status and respect to people who make us feel good, so much so that in a group of people, a person who can land even one appropriate joke is more likely to be voted as the leader. And the joke doesn’t even need to be very funny! It’s the fact that they were confident enough to try it and competent enough to read the room.
    Do you have any practical steps that people can apply to generate levity, even if they’re not a natural comedian?
    Levity is not just about being funny. In fact, aiming to be a comedian is not the right goal. When we watch stand-up on Netflix, comedians have rehearsed those jokes and honed them and practised them for a long time, and they’re delivering them in a monologue to an audience. It’s a completely different task from a live conversation. In real dialogue, what everybody is looking for is to feel engaged, and that doesn’t require particularly funny jokes or elaborate stories. When you see opportunities to make it fun or lighten the mood, that’s what you need to grab. It can come through a change to a new, fresh topic, or calling back to things that you talked about earlier in the conversation or earlier in your relationship. These callbacks – which sometimes do refer to something funny – are such a nice way of showing that you’ve listened and remembered. A levity move could also involve giving sincere compliments to other people. When you think nice things, when you admire someone, make sure you say it out loud.

    This brings us to the last element of TALK: kindness. Why do we so often fail to be as kind as we would like?
    Wobbles in kindness often come back to our egocentrism. Research shows that we underestimate how much other people’s perspectives differ from our own, and we forget that we have the tools to ask other people directly in conversation for their perspective. Being a kinder conversationalist is about trying to focus on your partner’s perspective and then figuring out what they need and helping them to get it.
    Finally, what is your number one tip for readers to have a better conversation the next time they speak to someone?
    Every conversation is surprisingly tricky and complex. When things don’t go perfectly, give yourself and others more grace. There will be trips and stumbles and then a little grace can go very, very far.
  • Ansys: R&D Engineer II (Remote - East Coast, US)

Requisition #: 16890

Our Mission: Powering Innovation That Drives Human Advancement
When visionary companies need to know how their world-changing ideas will perform, they close the gap between design and reality with Ansys simulation. For more than 50 years, Ansys software has enabled innovators across industries to push boundaries by using the predictive power of simulation. From sustainable transportation to advanced semiconductors, from satellite systems to life-saving medical devices, the next great leaps in human advancement will be powered by Ansys. Innovate With Ansys, Power Your Career.

Summary / Role Purpose
The R&D Engineer II contributes to the development of software products and supporting systems. In this role, the R&D Engineer II will collaborate with a team of expert professionals to understand customer requirements and accomplish development objectives.

Key Duties and Responsibilities
Performs moderately complex development activities, including the design, implementation, maintenance, testing and documentation of software modules and sub-systems
Understands and employs best practices
Performs moderately complex bug verification, release testing and beta support for assigned products; researches problems discovered by QA or product support and develops solutions
Understands the marketing requirements for a product, including target environment, performance criteria and competitive issues
Works under the general supervision of a development manager

Minimum Education/Certification Requirements and Experience
BS in Computer Science, Applied Mathematics, Engineering, or other natural science disciplines with 3-5 years' experience, or MS with a minimum of 2 years' experience
Working experience in technical software development, proven by academic, research, or industry projects
Good understanding of and skills in object-oriented programming
Experience with Java and C# / .NET
Role can be remote but must be based on the US East Coast due to time zone

Preferred Qualifications and Skills
Experience with C++ and Python, in addition to Java and C# / .NET
Knowledge of the Task-based Asynchronous Pattern (TAP)
Exposure to model-based systems engineering concepts
Working knowledge of SysML
Know-how in cloud computing technologies such as micro-service architectures, RPC frameworks (e.g., gRPC), and REST APIs
Knowledge of software security best practices
Experience working on an Agile software development team
Technical knowledge and experience with various engineering tools and methodologies, such as finite element simulation, CAD modeling, and systems architecture modelling, is a plus
Ability to assist more junior developers on an as-needed basis
Ability to learn quickly and to collaborate with others in a geographically distributed team
Excellent communication and interpersonal skills

At Ansys, we know that changing the world takes vision, skill, and each other. We fuel new ideas, build relationships, and help each other realize our greatest potential. We are ONE Ansys. We operate on three key components: our commitments to stakeholders, our values that guide how we work together, and our actions to deliver results. As ONE Ansys, we are powering innovation that drives human advancement.

Our Commitments: Amaze with innovative products and solutions; make our customers incredibly successful; act with integrity; ensure employees thrive and shareholders prosper.
Our Values: Adaptability (be open, welcome what's next); Courage (be courageous, move forward passionately); Generosity (be generous, share, listen, serve); Authenticity (be you, make us stronger).
Our Actions: We commit to audacious goals; we work seamlessly as a team; we demonstrate mastery; we deliver outstanding results.

VALUES IN ACTION
Ansys is committed to powering the people who power human advancement. We believe in creating and nurturing a workplace that supports and welcomes people of all backgrounds, encouraging them to bring their talents and experience to a workplace where they are valued and can thrive. Our culture is grounded in our four core values of adaptability, courage, generosity, and authenticity. Through our behaviors and actions, these values foster higher team performance and greater innovation for our customers. We're proud to offer programs, available to all employees, to further impact innovation and business outcomes, such as employee networks and learning communities that inform solutions for our globally minded customer base.

WELCOME WHAT'S NEXT IN YOUR CAREER AT ANSYS
At Ansys, you will find yourself among the sharpest minds and most visionary leaders across the globe. Collectively, we strive to change the world with innovative technology and transformational solutions. With a prestigious reputation for working with well-known, world-class companies, standards at Ansys are high, met by those willing to rise to the occasion and meet those challenges head on. Our team is passionate about pushing the limits of world-class simulation technology, empowering our customers to turn their design concepts into successful, innovative products faster and at a lower cost. At Ansys, it's about the learning, the discovery, and the collaboration. It's about the "what's next" as much as the "mission accomplished." And it's about the melding of disciplined intellect with strategic direction and results that have, can, and do impact real people in real ways. All this is forged within a working environment built on respect, autonomy, and ethics.

CREATING A PLACE WE'RE PROUD TO BE
Ansys is an S&P 500 company and a member of the NASDAQ-100. We are proud to have been recognized for the following recent awards, although our list goes on: Newsweek's Most Loved Workplace globally and in the U.S., Gold Stevie Award Winner, America's Most Responsible Companies, Fast Company World Changing Ideas, and Great Place to Work Certified.

Ansys is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other protected characteristics. Ansys does not accept unsolicited referrals for vacancies, and any unsolicited referral will become the property of Ansys. Upon hire, no fee will be owed to the agency, person, or entity.
  • The Role of the 3-2-1 Backup Rule in Cybersecurity

Daniel Pearson, CEO, KnownHost. June 12, 2025. 3 min read.

Cyber incidents are expected to cost the US $639 billion in 2025. According to the latest estimates, this figure will continue to rise, reaching approximately $1.82 trillion in cybercrime costs by 2028. These figures highlight the crucial importance of strong cybersecurity strategies, which businesses must build to reduce the likelihood of risks. As technology evolves at a dramatic pace, businesses are increasingly dependent on digital infrastructure, exposing themselves to threats such as ransomware, accidental data loss, and corruption. Despite being invented in 2009, the 3-2-1 backup rule has stayed relevant for businesses over the years, ensuring that data loss is minimized under threat, and it will remain a crucial method for preventing major data loss in the years ahead.

What Is the 3-2-1 Backup Rule?
The 3-2-1 backup rule is a popular backup strategy that ensures resilience against data loss. The setup consists of keeping your original data plus two backups, for three copies in total. The copies also need to be stored on two different types of storage, such as the cloud or a local drive. The "one" in the 3-2-1 backup rule represents storing a copy of your data off site, and this completes the setup. This setup has been considered a gold standard in IT security, as it minimizes points of failure and increases the chance of successful data recovery in the event of a cyber-attack.

Why Is This Rule Relevant in the Modern Cyber Threat Landscape?
Statistics show that in 2024, 80% of companies saw an increase in the frequency of cloud attacks. Although many businesses assume that storing data in the cloud is enough, it is certainly not failsafe, and businesses are in greater danger than ever due to the rapid development of technology and the AI capabilities attackers can manipulate and use. As cloud infrastructure has grown at a similar pace, cyber criminals are actively targeting it, leaving businesses with no clear recovery option. Therefore, more than ever, businesses need to invest in immutable backup solutions.

Common Backup Mistakes Businesses Make
A common misstep is keeping all backups on the same physical network. If malware gets in, it can quickly spread and encrypt both the primary data and the backups, wiping out everything in one go. Another issue is the lack of offline or air-gapped backups. Many businesses rely entirely on cloud-based or on-premises storage that's always connected, which means their recovery options could be compromised during an attack. Finally, one of the most overlooked yet crucial steps is testing backup restoration. A backup is only useful if it can actually be restored. Too often, companies skip regular testing. This can lead to a harsh reality check when they discover, too late, that their backup data is either corrupted or completely inaccessible after a breach.

How to Implement the 3-2-1 Backup Rule?
To successfully implement the 3-2-1 backup strategy as part of a robust cybersecurity framework, organizations should start by diversifying their storage methods. A resilient approach typically includes a mix of local storage, cloud-based solutions, and physical media such as external hard drives.
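The article stops short of prescribing tooling, but as a minimal sketch of that diversification step, the script below keeps the live data on the primary disk, writes a second copy to a separately mounted local drive, and ships a third copy off site to object storage. The paths, the bucket name, and the choice of AWS S3 via boto3 are illustrative assumptions rather than the author's recommendation; a production job would add encryption, scheduling, retention policies, and regular restore tests.

```python
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

import boto3  # assumption: AWS S3 as the offsite tier; swap in your provider's SDK

# Placeholder locations -- adjust to your environment.
PRIMARY = Path("/data/customers.db")                 # copy 1: the live data
LOCAL_BACKUP_DIR = Path("/mnt/backup-drive/daily")   # copy 2: a different local device/medium
OFFSITE_BUCKET = "example-offsite-backups"           # copy 3: off site (hypothetical bucket)


def sha256(path: Path) -> str:
    """Cheap integrity check; real restore testing means restoring and validating the data."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def run_321_backup() -> None:
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    backup_name = f"{PRIMARY.stem}-{stamp}{PRIMARY.suffix}"

    # Copy 2: a second storage device on site.
    LOCAL_BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    local_copy = LOCAL_BACKUP_DIR / backup_name
    shutil.copy2(PRIMARY, local_copy)
    assert sha256(local_copy) == sha256(PRIMARY), "local backup does not match source"

    # Copy 3: off site. Enabling bucket versioning or object lock adds the
    # write-once, read-many (WORM) protection discussed just below.
    boto3.client("s3").upload_file(str(local_copy), OFFSITE_BUCKET, f"daily/{backup_name}")


if __name__ == "__main__":
    run_321_backup()
```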
From there, it's essential to incorporate technologies that support write-once, read-many (WORM) functionality. This means backups cannot be modified or deleted, even by administrators, providing an extra layer of protection against threats. To further enhance resilience, organizations should make use of automation and AI-driven tools. These technologies can offer real-time monitoring, detect anomalies, and apply predictive analytics to maintain the integrity of backup data and flag any unusual activity or failures in the process. Lastly, it's crucial to ensure your backup strategy aligns with relevant regulatory requirements, such as GDPR in the UK or CCPA in the US. Compliance not only mitigates legal risk but also reinforces your commitment to data protection and operational continuity. By blending the time-tested 3-2-1 rule with modern advances like immutable storage and intelligent monitoring, organizations can build a highly resilient backup architecture that strengthens their overall cybersecurity posture.

About the Author
Daniel Pearson is the CEO of KnownHost, a managed web hosting service provider. Pearson also serves as a dedicated board member and supporter of the AlmaLinux OS Foundation, a non-profit organization focused on advancing AlmaLinux OS, an open-source operating system derived from RHEL. His passion for technology extends beyond his professional endeavors, as he actively promotes digital literacy and empowerment. Pearson's entrepreneurial drive and extensive industry knowledge have solidified his reputation as a respected figure in the tech community.
    #role #backup #rule #cybersecurity
    The Role of the 3-2-1 Backup Rule in Cybersecurity
    Daniel Pearson , CEO, KnownHostJune 12, 20253 Min ReadBusiness success concept. Cubes with arrows and target on the top.Cyber incidents are expected to cost the US billion in 2025. According to the latest estimates, this dynamic will continue to rise, reaching approximately 1.82 trillion US dollars in cybercrime costs by 2028. These figures highlight the crucial importance of strong cybersecurity strategies, which businesses must build to reduce the likelihood of risks. As technology evolves at a dramatic pace, businesses are increasingly dependent on utilizing digital infrastructure, exposing themselves to threats such as ransomware, accidental data loss, and corruption.  Despite the 3-2-1 backup rule being invented in 2009, this strategy has stayed relevant for businesses over the years, ensuring that the loss of data is minimized under threat, and will be a crucial method in the upcoming years to prevent major data loss.   What Is the 3-2-1 Backup Rule? The 3-2-1 backup rule is a popular backup strategy that ensures resilience against data loss. The setup consists of keeping your original data and two backups.  The data also needs to be stored in two different locations, such as the cloud or a local drive.  The one in the 3-2-1 backup rule represents storing a copy of your data off site, and this completes the setup.  This setup has been considered a gold standard in IT security, as it minimizes points of failure and increases the chance of successful data recovery in the event of a cyber-attack.  Related:Why Is This Rule Relevant in the Modern Cyber Threat Landscape? Statistics show that in 2024, 80% of companies have seen an increase in the frequency of cloud attacks.  Although many businesses assume that storing data in the cloud is enough, it is certainly not failsafe, and businesses are in bigger danger than ever due to the vast development of technology and AI capabilities attackers can manipulate and use.  As the cloud infrastructure has seen a similar speed of growth, cyber criminals are actively targeting these, leaving businesses with no clear recovery option. Therefore, more than ever, businesses need to invest in immutable backup solutions.  Common Backup Mistakes Businesses Make A common misstep is keeping all backups on the same physical network. If malware gets in, it can quickly spread and encrypt both the primary data and the backups, wiping out everything in one go. Another issue is the lack of offline or air-gapped backups. Many businesses rely entirely on cloud-based or on-premises storage that's always connected, which means their recovery options could be compromised during an attack. Related:Finally, one of the most overlooked yet crucial steps is testing backup restoration. A backup is only useful if it can actually be restored. Too often, companies skip regular testing. This can lead to a harsh reality check when they discover, too late, that their backup data is either corrupted or completely inaccessible after a breach. How to Implement the 3-2-1 Backup Rule? To successfully implement the 3-2-1 backup strategy as part of a robust cybersecurity framework, organizations should start by diversifying their storage methods. A resilient approach typically includes a mix of local storage, cloud-based solutions, and physical media such as external hard drives.  From there, it's essential to incorporate technologies that support write-once, read-many functionalities. 
This means backups cannot be modified or deleted, even by administrators, providing an extra layer of protection against threats. To further enhance resilience, organizations should make use of automation and AI-driven tools. These technologies can offer real-time monitoring, detect anomalies, and apply predictive analytics to maintain the integrity of backup data and flag any unusual activity or failures in the process. Lastly, it's crucial to ensure your backup strategy aligns with relevant regulatory requirements, such as GDPR in the UK or CCPA in the US. Compliance not only mitigates legal risk but also reinforces your commitment to data protection and operational continuity. Related:By blending the time-tested 3-2-1 rule with modern advances like immutable storage and intelligent monitoring, organizations can build a highly resilient backup architecture that strengthens their overall cybersecurity posture. About the AuthorDaniel Pearson CEO, KnownHostDaniel Pearson is the CEO of KnownHost, a managed web hosting service provider. Pearson also serves as a dedicated board member and supporter of the AlmaLinux OS Foundation, a non-profit organization focused on advancing the AlmaLinux OS -- an open-source operating system derived from RHEL. His passion for technology extends beyond his professional endeavors, as he actively promotes digital literacy and empowerment. Pearson's entrepreneurial drive and extensive industry knowledge have solidified his reputation as a respected figure in the tech community. See more from Daniel Pearson ReportsMore ReportsNever Miss a Beat: Get a snapshot of the issues affecting the IT industry straight to your inbox.SIGN-UPYou May Also Like #role #backup #rule #cybersecurity
  • NVIDIA TensorRT Boosts Stable Diffusion 3.5 Performance on NVIDIA GeForce RTX and RTX PRO GPUs

    Generative AI has reshaped how people create, imagine and interact with digital content.
    As AI models continue to grow in capability and complexity, they require more VRAM, or video random access memory. The base Stable Diffusion 3.5 Large model, for example, uses over 18GB of VRAM — limiting the number of systems that can run it well.
    By applying quantization to the model, noncritical layers can be removed or run with lower precision. NVIDIA GeForce RTX 40 Series and the Ada Lovelace generation of NVIDIA RTX PRO GPUs support FP8 quantization to help run these quantized models, and the latest-generation NVIDIA Blackwell GPUs also add support for FP4.
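    As a rough illustration of what reduced-precision weights buy you, the sketch below quantizes a single stand-in weight tensor to FP8 (E4M3) with per-tensor scaling in PyTorch 2.1 or later. This is an assumption-laden toy, not the TensorRT quantization flow described in the post; it only shows why storing weights in 8 bits roughly halves memory relative to BF16.

```python
# Toy post-training FP8 (E4M3) weight quantization in PyTorch (>= 2.1).
# Illustrative only; production flows use TensorRT tooling rather than this.
import torch

FP8_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_fp8(weight: torch.Tensor):
    """Per-tensor symmetric quantization of a BF16/FP32 weight tensor to FP8."""
    scale = weight.abs().max().clamp(min=1e-12) / FP8_MAX
    q = (weight / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return q, scale

def dequantize_fp8(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.to(torch.bfloat16) * scale

w = torch.randn(4096, 4096, dtype=torch.bfloat16)  # stand-in for one model layer
q, scale = quantize_fp8(w)

print(f"BF16 bytes: {w.numel() * w.element_size():,}")  # 2 bytes per element
print(f"FP8  bytes: {q.numel() * q.element_size():,}")  # 1 byte per element
print("max abs reconstruction error:", (dequantize_fp8(q, scale) - w).abs().max().item())
```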
    NVIDIA collaborated with Stability AI to quantize its latest model, Stable Diffusion (SD) 3.5 Large, to FP8, reducing VRAM consumption by 40%. Further optimizations to SD3.5 Large and Medium with the NVIDIA TensorRT software development kit (SDK) double performance.
    In addition, TensorRT has been reimagined for RTX AI PCs, combining its industry-leading performance with just-in-time (JIT), on-device engine building and an 8x smaller package size for seamless AI deployment to more than 100 million RTX AI PCs. TensorRT for RTX is now available as a standalone SDK for developers.
    RTX-Accelerated AI
    NVIDIA and Stability AI are boosting the performance and reducing the VRAM requirements of Stable Diffusion 3.5, one of the world’s most popular AI image models. With NVIDIA TensorRT acceleration and quantization, users can now generate and edit images faster and more efficiently on NVIDIA RTX GPUs.
    Stable Diffusion 3.5 quantized to FP8 (right) generates images in half the time with similar quality as FP16 (left). Prompt: A serene mountain lake at sunrise, crystal clear water reflecting snow-capped peaks, lush pine trees along the shore, soft morning mist, photorealistic, vibrant colors, high resolution.
    To address the VRAM limitations of SD3.5 Large, the model was quantized with TensorRT to FP8, reducing the VRAM requirement by 40% to 11GB. This means five GeForce RTX 50 Series GPU models can run the model entirely from memory, instead of just one.
    SD3.5 Large and Medium models were also optimized with TensorRT, an AI backend for taking full advantage of Tensor Cores. TensorRT optimizes a model’s weights and graph — the instructions on how to run a model — specifically for RTX GPUs.
    FP8 TensorRT boosts SD3.5 Large performance by 2.3x vs. BF16 PyTorch, with 40% less memory use. For SD3.5 Medium, BF16 TensorRT delivers a 1.7x speedup.
    Combined, FP8 TensorRT delivers a 2.3x performance boost on SD3.5 Large compared with running the original models in BF16 PyTorch, while using 40% less memory. And in SD3.5 Medium, BF16 TensorRT provides a 1.7x performance increase compared with BF16 PyTorch.
    The optimized models are now available on Stability AI’s Hugging Face page.
    NVIDIA and Stability AI are also collaborating to release SD3.5 as an NVIDIA NIM microservice, making it easier for creators and developers to access and deploy the model for a wide range of applications. The NIM microservice is expected to be released in July.
    TensorRT for RTX SDK Released
    Announced at Microsoft Build — and already available as part of the new Windows ML framework in preview — TensorRT for RTX is now available as a standalone SDK for developers.
    Previously, developers needed to pre-generate and package TensorRT engines for each class of GPU — a process that would yield GPU-specific optimizations but required significant time.
    With the new version of TensorRT, developers can create a generic TensorRT engine that’s optimized on device in seconds. This JIT compilation approach can be done in the background during installation or when they first use the feature.
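    The pattern below sketches that build-once-and-cache idea using the standard TensorRT Python API rather than the new TensorRT for RTX SDK, whose interfaces are not detailed in this post. File paths are placeholders, and exact network-creation flags and builder options vary between TensorRT versions, so read it as the shape of the workflow rather than drop-in code.

```python
# Illustrative JIT-style engine handling: build a TensorRT engine from an ONNX model the
# first time it is needed, cache the serialized plan on disk, and reuse it afterwards.
# Assumes the standard `tensorrt` Python package; paths and flags are placeholders.
import os
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_or_build_engine(onnx_path: str, engine_path: str):
    # Fast path: reuse the engine built on a previous run for this GPU.
    if os.path.exists(engine_path):
        with open(engine_path, "rb") as f:
            return trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(f.read())

    # Slow path: parse the ONNX graph and build an engine optimized for this device.
    builder = trt.Builder(TRT_LOGGER)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)  # flag naming varies by TRT version
    )
    parser = trt.OnnxParser(network, TRT_LOGGER)
    with open(onnx_path, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(f"Failed to parse {onnx_path}")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # allow reduced precision where the GPU supports it
    plan = builder.build_serialized_network(network, config)

    with open(engine_path, "wb") as f:
        f.write(plan)  # cache the serialized engine for subsequent launches
    return trt.Runtime(TRT_LOGGER).deserialize_cuda_engine(plan)

engine = load_or_build_engine("model.onnx", "model.plan")
```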
    The easy-to-integrate SDK is now 8x smaller and can be invoked through Windows ML — Microsoft’s new AI inference backend in Windows. Developers can download the new standalone SDK from the NVIDIA Developer page or test it in the Windows ML preview.
    For more details, read this NVIDIA technical blog and this Microsoft Build recap.
    Join NVIDIA at GTC Paris
    At NVIDIA GTC Paris at VivaTech — Europe’s biggest startup and tech event — NVIDIA founder and CEO Jensen Huang yesterday delivered a keynote address on the latest breakthroughs in cloud AI infrastructure, agentic AI and physical AI. Watch a replay.
    GTC Paris runs through Thursday, June 12, with hands-on demos and sessions led by industry leaders. Whether attending in person or joining online, there’s still plenty to explore at the event.
    Each week, the RTX AI Garage blog series features community-driven AI innovations and content for those looking to learn more about NVIDIA NIM microservices and AI Blueprints, as well as building AI agents, creative workflows, digital humans, productivity apps and more on AI PCs and workstations. 
  • Game Dev Digest Issue #286 - Design Tricks, Deep Dives, and more

    This article was originally published on GameDevDigest.com. Enjoy!
    What was Radiant AI, anyway? - A ridiculously deep dive into Oblivion's controversial AI system and its legacy. blog.paavo.me
    Consider The Horse Game - No, I don't think every dev should make a horse game (unlike horror, which I still think everyone should make at least one of). But I do think every developer should at least look at them, maybe even play one, because it is very important that you understand the importance of genre, fandom, and how visibility works. Even if you are not making a horse game, the lessons you can learn by looking at this sub-genre are very similar to other genres, just not as blatantly clear as they are with horse games. howtomarketagame.com
    Making a killing: The playful 2D terror of Psycasso® - I sat down with lead developer Benjamin Lavender and Omni, designer and producer, to talk about this playfully gory game that gives a classic retro style a fresh (if gruesome) twist. Unity
    Introduction to Asset Manager transfer methods in Unity - Unity's Asset Manager is a user-friendly digital asset management platform supporting over 70 file formats to help teams centralize, organize, discover, and use assets seamlessly across projects. It reduces redundant work by design, making cross-team collaboration smoother and accelerating production workflows. Unity
    Videos
    Rules of the Game: Five Tricks of Highly Effective Designers - Every working designer has them: unique techniques or "tricks" that they use when crafting gameplay. Sure, there's the general game design wisdom that everyone agrees on and can be found in many a game design book, but experienced game designers often have very specific rules that are personal to them, techniques that not everyone knows about or even agrees with. In this GDC 2015 session, five experienced game designers join the stage for 10 minutes each to share one game design "trick" that they use. Game Developers Conference
    Binding of Isaac Style Room Generator in Unity [Full Tutorial] - Our third part in the series: making the rooms! Game Dev Garnet
    Introduction to Unity Behavior | Unity Tutorial - In this video you'll become familiar with the core concepts of Unity Behavior, including a live example. LlamAcademy
    How I got my demo ready for Steam Next Fest - It's Steam Next Fest, and I've got a game in the showcase. So here are 7 tips for making the most of this demo-sharing festival. Game Maker's Toolkit
    Optimizing lighting in Projekt Z: Beyond Order - 314 Arts studio lead and founder Justin Miersch discusses how the team used the Screen Space Global Illumination feature in Unity's High Definition Render Pipeline (HDRP), along with the Unity Profiler and Timeline, to overcome the lighting challenges they faced in building Projekt Z: Beyond Order. Unity
    Memory Arenas in Unity: Heap Allocation Without the GC - In this video, we explore how to build a custom memory arena in Unity using unsafe code and manual heap allocation. You'll learn how to allocate raw memory for temporary graph-like structures, such as crafting trees or decision planners, without triggering the garbage collector. We'll walk through the concept of stack frames, translate that to heap-based arena allocation, and implement a fast, disposable system that gives you full control over memory layout and lifetime. Perfect for performance-critical systems where GC spikes aren't acceptable. git-amend
    Cloth Animation Using The Compute Shader - In this video, we dive into cloth simulation using OpenGL compute shaders. By applying simple mathematical equations, we'll achieve smooth, dynamic movement. We'll explore particle-based simulation, tackle synchronization challenges with double buffering, and optimize rendering using triangle strips for efficient memory usage. Whether you're familiar with compute shaders or just getting started, this is the perfect way to step up your real-time graphics skills! OGLDEV
    How we're designing games for a broader audience - Our games are too hard. BiteMe Games
    Assets
    Learn Game Dev - Unity, Godot, Unreal, GameMaker, Blender & C# - Make games like a pro. Passionate about video games? Then start making your own! Our latest bundle will help you learn vital game development skills. Master the most popular creation platforms like Unity, Godot, Unreal, GameMaker, Blender, and C# (now that's a sharp-lookin' bundle!). Build a 2.5D farming RPG with Unreal Engine, create a micro turn-based RPG in Godot, explore game optimization, and so much more. Humble Bundle Affiliate
    Big Bang Unreal & Unity Asset Packs Bundle - 5000+ unrivaled assets in one bundle. Calling all game devs: build your worlds with this gigantic bundle of over 5000 assets, including realistic and stylized environments, SFX packs, and powerful tools. Perfect for hobbyists, beginners, and professional developers alike, you'll gain access to essential resources, tutorials, and beta-testing-ready content to start building immediately. The experts at Leartes Studios have curated an amazing library packed with value, featuring environments, VFX packs, and tutorial courses on Unreal Engine, Blender, Substance Painter, and ZBrush. Get the assets you need to bring your game to life, and help support One Tree Planted with your purchase! This bundle provides Unity Asset Store keys directly with your purchase, and FAB keys via redemption through Cosmos, if the product is available on those platforms. Humble Bundle Affiliate
    Gameplay Tools 50% Off - Core systems, half the price. Get pro-grade tools to power your gameplay: combat, cutscenes, UI, and more. Including HTrace: World Space Global Illumination, VFX Graph - Ultra Mega Pack - Vol.1, Magic Animation Blend, Utility Intelligence (v2): Utility AI Framework for Unity 6, and Build for iOS/macOS on Windows. Unity Affiliate
    Hi guys, I created a website about 6 years ago on which I host all my field recordings and foley sounds. All free to download and use under CC0. There are currently 50+ packs with thousands of sounds and hours of field recordings, all perfect for game SFX and UI. I think game designers can benefit from the wide range of sounds on the site, especially those that enhance immersion and atmosphere. signaturesounds.org
    SmartAddresser - Automate Addressing, Labeling, and Version Control for Unity's Addressable Asset System. CyberAgentGameEntertainment, Open Source
    EasyCS - EasyCS is an easy-to-use and flexible framework for Unity, adopting a Data-Driven Entity & Actor-Component approach. It bridges Unity's classic OOP with powerful data-oriented patterns, without forcing a complete ECS paradigm shift or a mindset change. Build smarter, not harder. Watcher3056, Open Source
    Binding-Of-Isaac_Map-Generator - Binding of Isaac map generator for Unity2D. GarnetKane99, Open Source
    Helion - A modern fast-paced Doom FPS engine. Helion-Engine, Open Source
    PixelationFx - Pixelation post effect for Unity URP. NullTale, Open Source
    Extreme Add-Ons Bundle For Blender & ZBrush - Extraordinary quality, extreme add-ons. Get quality add-ons for Blender and ZBrush with our latest bundle! We've teamed up with the pros at FlippedNormals to deliver a gigantic library of powerful tools for your next game development project. Add new life to your creative work with standout assets like Real-time Hair ZBrush Plugin, Physical Starlight and Atmosphere, Easy Mesh ZBrush Plugin, and more. Get the add-ons you need to bring color and individuality to your next project, and help support Extra Life with your purchase! Humble Bundle Affiliate
    Shop up to 50% off Gabriel Aguiar Prod - Publisher Sale - Gabriel Aguiar Prod. is best known for his extensive VFX assets that help many developers prototype and ship games with special effects. His support and educational material are also invaluable resources for the game dev community. PLUS get VFX Graph - Stylized Fire - Vol. 1 for FREE with code GAP2025. Unity Affiliate
    Spotlight
    Dream Garden - Dream Garden is a simulation game about building tiny cute garden dioramas. A large selection of tools, plants, decorations, and customization awaits you. Try all of them and create your dream garden. [You can find it on Steam] Campfire Studio
    My game, Call Of Dookie - Demo available on Steam.
    You can subscribe to the free weekly newsletter on GameDevDigest.com
    This post includes affiliate links; I may receive compensation if you purchase products or services from the different links provided in this article.
  • MedTech AI, hardware, and clinical application programmes

    Modern healthcare innovations span AI, devices, software, images, and regulatory frameworks, all requiring stringent coordination. Generative AI arguably has the strongest transformative potential in healthcare technology programmes, and it is already being applied across domains such as R&D, commercial operations, and supply chain management.
    Traditional models, such as face-to-face appointments and paper-based processes, may not be sufficient for today's fast-paced, data-driven medical landscape. Healthcare professionals and patients are therefore seeking more convenient and efficient ways to access and share information while meeting the complex standards of modern medical science.
    According to McKinsey, Medtech companies are at the forefront of healthcare innovation; the firm estimates they could capture between $14 billion and $55 billion annually in productivity gains. Through GenAI adoption, an additional $50 billion or more in revenue is estimated from product and service innovations. A 2024 McKinsey survey revealed that around two thirds of Medtech executives have already implemented Gen AI, with approximately 20% scaling their solutions up and reporting substantial productivity benefits.
    While advanced technology implementation is growing across the medical industry, challenges persist. Organisations face hurdles like data integration issues, decentralised strategies, and skill gaps. Together, these highlight a need for a more streamlined approach to Gen AI deployment.
    Of all the Medtech domains, R&D is leading the way in Gen AI adoption. Being the most comfortable with new technologies, R&D departments use Gen AI tools to streamline work processes, such as summarising research papers or scientific articles, highlighting a grassroots adoption trend. Individual researchers are using AI to enhance productivity, even when no formal company-wide strategies are in place. While AI tools automate and accelerate R&D tasks, human review is still required to ensure final submissions are correct and satisfactory. Gen AI is proving to reduce time spent on administrative tasks and to improve research accuracy and depth, with some companies experiencing 20% to 30% gains in research productivity.
    KPIs for success in healthcare product programmes
    Measuring business performance is essential in the healthcare sector. The number one goal is, of course, to deliver high-quality care while simultaneously maintaining efficient operations. By measuring and analysing KPIs, healthcare providers are in a better position to improve patient outcomes through data-based decisions. KPIs can also improve resource allocation and encourage continuous improvement in all areas of care.
    Healthcare product programmes are structured initiatives that prioritise the development, delivery, and continual optimisation of medical products. To succeed, they require cross-functional coordination of clinical, technical, regulatory, and business teams. Time to market is a critical measure, tracking how quickly a product moves from the concept stage to launch.
    Particular emphasis needs to be placed on labelling and documentation: McKinsey notes that AI-assisted labelling has resulted in a 20%-30% improvement in operational efficiency. Resource utilisation rates are also important, showing how efficiently time, budget, and headcount are used during the development stage of products.
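    As a rough, hypothetical illustration of how such programme KPIs might be computed, the snippet below defines time to market and resource utilisation for a made-up product programme; the field names and figures are assumptions for illustration, not values from any report.

```python
# Hypothetical KPI calculations for a healthcare product programme.
# All names and numbers below are illustrative assumptions.
from datetime import date

programme = {
    "concept_approved": date(2024, 1, 15),
    "market_launch": date(2025, 3, 1),
    "budget_allocated": 2_500_000.0,   # in some currency unit
    "budget_spent": 2_100_000.0,
    "planned_team_hours": 18_000,
    "logged_team_hours": 16_400,
}

# Time to market: elapsed days from approved concept to launch.
time_to_market_days = (programme["market_launch"] - programme["concept_approved"]).days

# Resource utilisation: share of planned budget and team hours actually consumed.
budget_utilisation = programme["budget_spent"] / programme["budget_allocated"]
hours_utilisation = programme["logged_team_hours"] / programme["planned_team_hours"]

print(f"Time to market: {time_to_market_days} days")
print(f"Budget utilisation: {budget_utilisation:.0%}")
print(f"Team-hours utilisation: {hours_utilisation:.0%}")
```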
    In the healthcare sector, KPIs ought to focus on several factors, including operational efficiency, patient outcomes, the financial health of the business, and patient satisfaction. To achieve a comprehensive view of performance, these can be categorised into financial, operational, clinical quality, and patient experience metrics.
    Bridging user experience with technical precision – design awards
    Innovation is no longer judged solely by technical performance, with user experience (UX) being equally important. Some of the latest innovations in healthcare are recognised at the UX Design Awards, which honour products that exemplify the best in user experience as well as technical precision. Top products prioritise the needs and experiences of both patients and healthcare professionals, while also ensuring each product meets the sector's rigorous clinical and regulatory standards.
    One example is the CIARTIC Move by Siemens Healthineers, a self-driving 3D C-arm imaging system that lets surgeons control the device wirelessly from within the sterile field while they operate. Computer hardware company ASUS has also received accolades for its HealthConnect App and VivoWatch Series, showcasing the fusion of AIoT-driven smart healthcare solutions with user-friendly interfaces, sometimes in what are essentially consumer devices. This demonstrates how technical innovation is being made accessible and increasingly intuitive as patients gain technical fluency.
    Navigating regulatory and product development pathways simultaneously
    Establishing clinical and regulatory pathways in parallel is important, as it enables healthcare teams to feed a twin stream of findings back into development. Gen AI adoption has become a transformative approach here, automating the production and refinement of complex documents and the handling of mixed, structured, and unstructured data sets. By integrating regulatory considerations early and adopting technologies like Gen AI as part of agile practices, healthcare product programmes help teams navigate a regulatory landscape that can often shift. Baking a regulatory mindset into a team early helps ensure compliance and continued innovation.
    Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.