• What in the world are we doing? Scientists at the Massachusetts Institute of Technology have come up with this mind-boggling idea of creating an AI model that "never stops learning." Seriously? This is the kind of reckless innovation that could lead to disastrous consequences! Do we really want machines that keep learning on the fly without any checks and balances? Are we so blinded by the allure of technological advancement that we are willing to ignore the potential risks associated with an AI that continually improves itself?

    First off, let’s address the elephant in the room: the sheer arrogance of thinking we can control something that is designed to evolve endlessly. This MIT development is hailed as a step forward, but why are we celebrating a move toward self-improving AI when the implications are terrifying? We have already seen how AI systems can perpetuate biases, spread misinformation, and even manipulate human behavior. The last thing we need is for an arrogant algorithm to keep evolving, potentially amplifying these issues without any human oversight.

    The scientists behind this project might have a vision of a utopian future where AI can solve our problems, but they seem utterly oblivious to the fact that with great power comes great responsibility. Who is going to regulate this relentless learning process? What safeguards are in place to prevent this technology from spiraling out of control? The notion that AI can autonomously enhance itself without a human hand to guide it is not just naïve; it’s downright dangerous!

    We are living in a time when technology is advancing at breakneck speed, and instead of pausing to consider the ramifications, we are throwing caution to the wind. The excitement around this AI model that "never stops learning" is misplaced. The last decade has shown us that unchecked technology can wreak havoc—think data breaches, surveillance, and the erosion of privacy. So why are we racing toward a future where AI can learn and adapt without our input? Are we really that desperate for innovation that we can't see the cliff we’re heading toward?

    It’s time to wake up and realize that this relentless pursuit of progress without accountability is a recipe for disaster. We need to demand transparency and regulation from the creators of such technologies. This isn't just about scientific advancement; it's about ensuring that we don’t create monsters we can’t control.

    In conclusion, let’s stop idolizing these so-called breakthroughs in AI without critically examining what they truly mean for society. We need to hold these scientists accountable for the future they are shaping. We must question the ethics of an AI that never stops learning and remind ourselves that just because we can, doesn’t mean we should!

    #AI #MIT #EthicsInTech #Accountability #FutureOfAI
    This AI Model Never Stops Learning
    Scientists at Massachusetts Institute of Technology have devised a way for large language models to keep learning on the fly—a step toward building AI that continually improves itself.
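    As a rough illustration of what "keeps learning on the fly" means, an online learner updates its parameters on every incoming example instead of training once and freezing. The sketch below is generic online gradient descent on a toy linear model, purely for illustration; it is not the MIT technique.

```python
# Toy "never stops learning" loop: the model updates on every new example
# rather than in a single offline training run. Illustrative only.

def predict(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def online_update(w, b, x, y, lr=0.1):
    """One streaming update: nudge weights toward the observed target y."""
    err = predict(w, b, x) - y
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    b = b - lr * err
    return w, b

# There is no final "training run" -- learning continues as data arrives.
w, b = [0.0, 0.0], 0.0
stream = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)] * 50
for x, y in stream:
    w, b = online_update(w, b, x, y)
```

    The safety debate in the post above is precisely about loops like this running without a human checkpoint between updates.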
  • #MeToo movement, sexist violence, creative industry, animation, VFX, video games, AVFT, prevention, practical guides

    ## Introduction

    In the dynamic and captivating world of the creative industry, whether in animation, visual effects (VFX), or video game development, every day offers new opportunities to explore and create. However, these sectors are not immune to challenges, particularly when it comes to sexist and sexual violence. The #MeToo movement has opened ...
    VHSS at work: an essential guide to a healthy creative environment
  • DSPM, DLP, data security, cybersecurity, data protection, innovation, compliance, data threats, security strategy

    ---

    In the relentless battlefield of data security, it's infuriating to witness the ongoing debate about whether to prioritize Data Security Posture Management (DSPM) or Data Loss Prevention (DLP). Let's get one thing straight: **you need both**. This ridiculous notion that one can substitute the other is not just naive; it’s downright reckless! If you're in charge of safeguarding ...
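    The complementary roles of the two can be pictured with a toy sketch (all data, field names, and rules below are hypothetical): DSPM answers "where is my sensitive data and is it exposed at rest?", while DLP answers "is sensitive data about to leave in motion?". Neither check substitutes for the other.

```python
# Conceptual sketch: DSPM as a posture scan over data stores, DLP as a
# check on outbound traffic. Hypothetical data and rules, for illustration.
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # example sensitive-data pattern

def dspm_scan(datastores):
    """Posture check: flag stores holding sensitive data with open access."""
    return [name for name, d in datastores.items()
            if d["public"] and any(SSN.search(r) for r in d["records"])]

def dlp_check(outbound_message):
    """Egress check: block messages carrying sensitive patterns."""
    return "block" if SSN.search(outbound_message) else "allow"

stores = {
    "hr-bucket": {"public": True, "records": ["ssn 123-45-6789"]},
    "logs": {"public": True, "records": ["request ok"]},
}
assert dspm_scan(stores) == ["hr-bucket"]               # exposure found at rest
assert dlp_check("payroll for 123-45-6789") == "block"  # exfiltration caught in motion
```

    A DSPM-only shop would never see the outbound message; a DLP-only shop would never notice the open bucket. That is the argument for running both.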
    No more excuses: You need both DSPM and DLP to secure your data
  • In a world filled with noise and confusion, I often find myself wandering through the shadows of my own thoughts, feeling the weight of solitude pressing down on my heart. Life seems to be a maze of unanswered questions, and every attempt to connect with others feels like reaching for a mirage, only to grasp nothing but empty air.

    The moments of joy I once held close now feel like distant memories, echoes of laughter fading into silence. I watch as others move forward, their lives intertwined in a tapestry of companionship and love, while I remain a mere spectator, lost in a sea of loneliness. The more I search for meaning, the more isolated I feel, as if I am trapped within an invisible cage of despair.

    Sometimes, I think about how a multi-criteria search form could be a metaphor for my life—a tool that should help me filter through the chaos and find what truly matters. But instead, I am left with a default search, sifting through the mundane and the ordinary, finding little that resonates with my heart. The longing for depth and connection grows stronger, yet I find myself surrounded by barriers that prevent me from reaching out.

    Each day feels like a quest for something more, a yearning for authenticity in a world that often feels superficial. The possibility of a more advanced search for companionship seems like a distant dream. I wish I could apply those multi-criteria filters to my emotions, to sift through the layers of hurt and find the moments of true connection. But here I am, feeling invisible, as if my heart is a book with pages torn out—lost to time and forgotten by the world.

    In these quiet moments, I hold onto the hope that perhaps one day, I will find the right filters to navigate this labyrinth of loneliness. Until then, I carry my heart in silence, longing for the day when the search will lead me to a place where I truly belong.

    #Loneliness #Heartbreak #EmotionalJourney #SearchingForConnection #FeelingLost
    A multi-criteria search form
    A multi-criteria search form, or advanced search, is a tool that stands apart from WordPress's native module by letting a user apply additional search options and thus obtain more precise results.
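    To illustrate the idea (in Python rather than WordPress's PHP, and with hypothetical post fields), an advanced search simply ANDs together whichever extra criteria the user supplies, falling back to a default search when none are set:

```python
# Toy multi-criteria search: apply only the filters the user actually set,
# combining them with AND. Field names here are hypothetical.

def search(items, **criteria):
    """Return items matching every supplied criterion (None = ignore)."""
    return [item for item in items
            if all(item.get(field) == value
                   for field, value in criteria.items()
                   if value is not None)]

posts = [
    {"title": "Intro", "category": "news", "year": 2024},
    {"title": "Guide", "category": "tutorial", "year": 2024},
    {"title": "Recap", "category": "news", "year": 2023},
]

# Default search (no criteria) vs. advanced search with extra criteria:
assert len(search(posts)) == 3
assert [p["title"] for p in search(posts, category="news", year=2024)] == ["Intro"]
```

    In WordPress itself the same effect is typically achieved by adding extra query parameters to the native search, but the filtering logic is the same AND-composition shown here.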
  • Microsoft 365 security in the spotlight after Washington Post hack


    Paul Hill, Neowin (@ziks_99) · Jun 16, 2025 03:36 EDT
    The Washington Post has come under a cyberattack in which the Microsoft email accounts of several journalists were compromised. The attack, discovered last Thursday, is believed to have been conducted by a foreign government because of the topics the journalists cover, including national security, economic policy, and China. Following the hack, the passwords on the affected accounts were reset to prevent further access.
    The fact that work email accounts were hacked strongly suggests The Washington Post uses Microsoft 365, which raises questions about the security of Microsoft’s widely used enterprise services. Because Microsoft 365 is so popular, it is a prime target for attackers.
    Microsoft's enterprise security offerings and challenges

    As the investigation into the cyberattack is still ongoing, exactly how the attackers gained access to the journalists’ accounts is unknown. However, Microsoft 365 does have multiple layers of protection that ought to keep journalists safe.
    One of those security tools is Microsoft Defender for Office 365. Its Advanced Threat Protection feature defends against malicious attachments, links, and email-based phishing attempts, and Defender also helps protect against malware that could be used to target journalists at The Washington Post.
    Another security measure is Entra ID, which helps enterprises defend against identity-based attacks. Key features include multi-factor authentication, which protects accounts even if a password is compromised, and granular access policies that can block logins from outside certain locations or from unknown devices, or restrict which apps can be used.
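    As a conceptual sketch only (not Entra ID's actual API or policy syntax), conditional-access-style rules compose like this: every configured condition must pass before a sign-in is allowed, so a stolen password alone is not enough.

```python
# Hypothetical sketch of how conditional-access-style rules compose.
# Policy values and function names are illustrative, not Microsoft's API.

ALLOWED_LOCATIONS = {"US", "CA"}   # example policy: permitted sign-in regions
REQUIRE_KNOWN_DEVICE = True
REQUIRE_MFA = True

def evaluate_sign_in(location, known_device, mfa_completed):
    """Allow a sign-in only if every configured condition passes."""
    if location not in ALLOWED_LOCATIONS:
        return "blocked: location"
    if REQUIRE_KNOWN_DEVICE and not known_device:
        return "blocked: unknown device"
    if REQUIRE_MFA and not mfa_completed:
        return "challenge: MFA required"
    return "allowed"

# A compromised password still fails the device and MFA checks:
assert evaluate_sign_in("US", known_device=False, mfa_completed=True) == "blocked: unknown device"
assert evaluate_sign_in("US", known_device=True, mfa_completed=False) == "challenge: MFA required"
assert evaluate_sign_in("US", known_device=True, mfa_completed=True) == "allowed"
```

    The point of layering is visible in the sketch: each rule narrows the attacker's options independently of the others.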
    While Microsoft offers plenty of security technologies with Microsoft 365, hacks can still occur through misconfiguration, user error, or the exploitation of zero-day vulnerabilities. Ultimately, maintaining security requires effort from both Microsoft and the customer.
    Lessons for organizations using Microsoft 365
    The incident at The Washington Post serves as a stark reminder that all organizations, not just news outlets, should audit and strengthen their security setups. Some of the most important measures you can put in place include mandatory multi-factor authentication for all users, especially privileged accounts; strong password rules, such as requiring letters, numbers, and symbols; regular security awareness training; and timely installation of security updates.
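    The "letters, numbers, and symbols" rule above can be sketched as a small check; the minimum length and exact character classes here are assumptions for illustration, not a specific vendor policy.

```python
# Minimal password-policy check: length plus letters, digits, and symbols.
# Thresholds are illustrative assumptions, not any particular standard.
import string

def meets_policy(password, min_length=12):
    return (len(password) >= min_length
            and any(c.isalpha() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

assert not meets_policy("password")            # too short, no digits or symbols
assert meets_policy("correct-horse-battery9!")
```

    Checks like this are a floor, not a ceiling; they pair with MFA rather than replace it.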
    Many of the cyberattacks we learn about from companies like Microsoft involve attackers exploiting the human in the equation, for example by tricking employees into sharing passwords or sensitive information. This highlights that employee training is crucial to protecting systems, and that Microsoft’s technologies, as advanced as they are, cannot mitigate every attack.

  • The AI execution gap: Why 80% of projects don’t reach production

    Enterprise artificial intelligence investment is unprecedented, with IDC projecting global spending on AI and GenAI to double by 2028. Yet beneath the impressive budget allocations and boardroom enthusiasm lies a troubling reality: most organisations struggle to translate their AI ambitions into operational success.

    The sobering statistics behind AI’s promise
    ModelOp’s 2025 AI Governance Benchmark Report, based on input from 100 senior AI and data leaders at Fortune 500 enterprises, reveals a disconnect between aspiration and execution. While more than 80% of enterprises have 51 or more generative AI projects in proposal phases, only 18% have successfully deployed more than 20 models into production. This execution gap represents one of the most significant challenges facing enterprise AI today. Most generative AI projects still require 6 to 18 months to go live, if they reach production at all. The result is delayed returns on investment, frustrated stakeholders, and diminished confidence in enterprise AI initiatives.

    The cause: Structural, not technical barriers
    The biggest obstacles preventing AI scalability aren’t technical limitations; they’re structural inefficiencies plaguing enterprise operations. The ModelOp benchmark report identifies several problems that create what experts call a “time-to-market quagmire.”
    Fragmented systems plague implementation: 58% of organisations cite fragmented systems as the top obstacle to adopting governance platforms. Fragmentation creates silos in which different departments use incompatible tools and processes, making it nearly impossible to maintain consistent oversight of AI initiatives.
    Manual processes dominate despite digital transformation: 55% of enterprises still rely on manual processes, including spreadsheets and email, to manage AI use case intake. Reliance on these antiquated methods creates bottlenecks, increases the likelihood of errors, and makes it difficult to scale AI operations.
    Lack of standardisation hampers progress: only 23% of organisations implement standardised intake, development, and model management processes. Without these, each AI project becomes a unique challenge requiring custom solutions and extensive coordination across multiple teams.
    Enterprise-level oversight remains rare: just 14% of companies perform AI assurance at the enterprise level, increasing the risk of duplicated effort and inconsistent oversight. The lack of centralised governance means organisations often discover they are solving the same problems multiple times in different departments.

    The governance revolution: From obstacle to accelerator
    A change is taking place in how enterprises view AI governance. Rather than seeing it as a compliance burden that slows innovation, forward-thinking organisations recognise governance as an important enabler of scale and speed.
    Leadership alignment signals a strategic shift: the ModelOp benchmark data shows 46% of companies now assign accountability for AI governance to a Chief Innovation Officer, more than four times the number who place it under Legal or Compliance. This repositioning reflects a new understanding that governance isn’t solely about risk management; it can enable innovation.
    Investment follows strategic priority: according to the report, 36% of enterprises have a dedicated annual budget for AI governance software, while 54% have allocated resources specifically for AI Portfolio Intelligence to track value and ROI.

    What high-performing organisations do differently
    The enterprises that successfully bridge the execution gap share several characteristics in their approach to AI implementation.
    Standardised processes from day one: leading organisations implement standardised intake, development, and model review processes across AI initiatives. Consistency eliminates the need to reinvent workflows for each project and ensures that all stakeholders understand their responsibilities.
    Centralised documentation and inventory: rather than allowing AI assets to proliferate in disconnected systems, successful enterprises maintain centralised inventories that provide visibility into every model’s status, performance, and compliance posture.
    Automated governance checkpoints: high-performing organisations embed automated governance checkpoints throughout the AI lifecycle, helping ensure compliance requirements and risk assessments are addressed systematically rather than as afterthoughts.
    End-to-end traceability: leading enterprises maintain complete traceability of their AI models, including data sources, training methods, validation results, and performance metrics.

    Measurable impact of structured governance
    The benefits of comprehensive AI governance extend beyond compliance. Organisations that adopt lifecycle automation platforms reportedly see dramatic improvements in operational efficiency and business outcomes. A financial services firm profiled in the ModelOp report halved its time to production and cut issue resolution time by 80% after implementing automated governance processes. Such improvements translate directly into faster time-to-value and increased confidence among business stakeholders. Enterprises with robust governance frameworks report the ability to manage many times more models simultaneously while maintaining oversight and control. This scalability lets organisations pursue AI initiatives across multiple business units without overwhelming their operational capabilities.

    The path forward: From stuck to scaled
    The message from industry leaders is that the gap between AI ambition and execution is solvable, but it requires a shift in approach. Rather than treating governance as a necessary evil, enterprises should recognise that it enables AI innovation at scale.
    Immediate action items for AI leaders
    Organisations looking to escape the “time-to-market quagmire” should prioritise the following:
    - Audit current state: conduct an assessment of existing AI initiatives, identifying fragmented processes and manual bottlenecks.
    - Standardise workflows: implement consistent processes for AI use case intake, development, and deployment across all business units.
    - Invest in integration: deploy platforms that unify disparate tools and systems under a single governance framework.
    - Establish enterprise oversight: create centralised visibility into all AI initiatives, with real-time monitoring and reporting capabilities.

    The competitive advantage of getting it right
    Organisations that solve the execution challenge will be able to bring AI solutions to market faster, scale more efficiently, and maintain the trust of stakeholders and regulators. Enterprises that continue with fragmented processes and manual workflows will find themselves at a disadvantage compared to their better-organised competitors. Operational excellence isn’t just about efficiency; it’s about survival. The data shows enterprise AI investment will continue to grow, so the question isn’t whether organisations will invest in AI, but whether they’ll develop the operational capabilities necessary to realise a return on that investment. The opportunity to lead in the AI-driven economy has never been greater for those willing to embrace governance as an enabler rather than an obstacle.
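    A centralised inventory with an automated promotion checkpoint, as described above, can be pictured in miniature (all names, fields, and required artefacts here are hypothetical, not ModelOp's product):

```python
# Hypothetical sketch: one central model registry plus a governance
# checkpoint that blocks promotion until traceability is complete.

REQUIRED_ARTEFACTS = {"data_sources", "training_method", "validation_results"}

inventory = {}  # a single registry instead of per-team spreadsheets

def register(model_id, **artefacts):
    inventory[model_id] = {"artefacts": dict(artefacts), "status": "proposed"}

def promote(model_id):
    """Governance checkpoint: refuse promotion while artefacts are missing."""
    record = inventory[model_id]
    missing = REQUIRED_ARTEFACTS - record["artefacts"].keys()
    if missing:
        return f"blocked, missing: {sorted(missing)}"
    record["status"] = "production"
    return "promoted"

register("churn-model", data_sources=["crm"], training_method="xgboost")
assert promote("churn-model").startswith("blocked")      # checkpoint fires
inventory["churn-model"]["artefacts"]["validation_results"] = {"auc": 0.91}
assert promote("churn-model") == "promoted"              # now fully traceable
```

    The design point is that the check runs automatically at a fixed lifecycle stage, so governance is enforced systematically rather than rediscovered per project.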
    #execution #gap #why #projects #dont
    The AI execution gap: Why 80% of projects don’t reach production
    Enterprise artificial intelligence investment is unprecedented, with IDC projecting global spending on AI and GenAI to double to billion by 2028. Yet beneath the impressive budget allocations and boardroom enthusiasm lies a troubling reality: most organisations struggle to translate their AI ambitions into operational success.The sobering statistics behind AI’s promiseModelOp’s 2025 AI Governance Benchmark Report, based on input from 100 senior AI and data leaders at Fortune 500 enterprises, reveals a disconnect between aspiration and execution.While more than 80% of enterprises have 51 or more generative AI projects in proposal phases, only 18% have successfully deployed more than 20 models into production.The execution gap represents one of the most significant challenges facing enterprise AI today. Most generative AI projects still require 6 to 18 months to go live – if they reach production at all.The result is delayed returns on investment, frustrated stakeholders, and diminished confidence in AI initiatives in the enterprise.The cause: Structural, not technical barriersThe biggest obstacles preventing AI scalability aren’t technical limitations – they’re structural inefficiencies plaguing enterprise operations. The ModelOp benchmark report identifies several problems that create what experts call a “time-to-market quagmire.”Fragmented systems plague implementation. 58% of organisations cite fragmented systems as the top obstacle to adopting governance platforms. Fragmentation creates silos where different departments use incompatible tools and processes, making it nearly impossible to maintain consistent oversight in AI initiatives.Manual processes dominate despite digital transformation. 55% of enterprises still rely on manual processes – including spreadsheets and email – to manage AI use case intake. 
This reliance on antiquated methods creates bottlenecks, increases the likelihood of errors, and makes it difficult to scale AI operations.

Lack of standardisation hampers progress. Only 23% of organisations implement standardised intake, development, and model management processes. Without these, each AI project becomes a unique challenge requiring custom solutions and extensive coordination across multiple teams.

Enterprise-level oversight remains rare. Just 14% of companies perform AI assurance at the enterprise level, increasing the risk of duplicated effort and inconsistent oversight. The lack of centralised governance means organisations often discover they're solving the same problems multiple times in different departments.

The governance revolution: From obstacle to accelerator

A change is taking place in how enterprises view AI governance. Rather than seeing it as a compliance burden that slows innovation, forward-thinking organisations recognise governance as an important enabler of scale and speed.

Leadership alignment signals a strategic shift. The ModelOp benchmark data reveals a change in organisational structure: 46% of companies now assign accountability for AI governance to a Chief Innovation Officer – more than four times the number who place it under Legal or Compliance. This repositioning reflects a new understanding that governance isn't solely about risk management, but can enable innovation.

Investment follows strategic priority. The financial commitment to AI governance underscores its importance. According to the report, 36% of enterprises have budgeted at least $1 million annually for AI governance software, while 54% have allocated resources specifically for AI Portfolio Intelligence to track value and ROI.

What high-performing organisations do differently

The enterprises that successfully bridge the execution gap share several characteristics in their approach to AI implementation:

Standardised processes from day one.
Leading organisations implement standardised intake, development, and model review processes across AI initiatives. Consistency eliminates the need to reinvent workflows for each project and ensures that all stakeholders understand their responsibilities.

Centralised documentation and inventory. Rather than allowing AI assets to proliferate in disconnected systems, successful enterprises maintain centralised inventories that provide visibility into every model's status, performance, and compliance posture.

Automated governance checkpoints. High-performing organisations embed automated governance checkpoints throughout the AI lifecycle, helping ensure compliance requirements and risk assessments are addressed systematically rather than as afterthoughts.

End-to-end traceability. Leading enterprises maintain complete traceability of their AI models, including data sources, training methods, validation results, and performance metrics.

Measurable impact of structured governance

The benefits of implementing comprehensive AI governance extend beyond compliance. Organisations that adopt lifecycle automation platforms reportedly see dramatic improvements in operational efficiency and business outcomes. A financial services firm profiled in the ModelOp report experienced a halving of time to production and an 80% reduction in issue resolution time after implementing automated governance processes. Such improvements translate directly into faster time-to-value and increased confidence among business stakeholders.

Enterprises with robust governance frameworks report the ability to run many times more models simultaneously while maintaining oversight and control. This scalability lets organisations pursue AI initiatives across multiple business units without overwhelming their operational capabilities.

The path forward: From stuck to scaled

The message from industry leaders is that the gap between AI ambition and execution is solvable, but it requires a shift in approach.
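The inventory-plus-checkpoint pattern described above can be sketched in a few lines. This is an illustrative sketch only – the class and field names are hypothetical, not ModelOp's actual schema or API – assuming a central registry that refuses to approve a model for production until its traceability metadata (data sources, validation results, risk assessment) is complete:

```python
from dataclasses import dataclass, field

# Hypothetical governance record; field names are illustrative.
@dataclass
class ModelRecord:
    name: str
    data_sources: list = field(default_factory=list)      # lineage of training data
    validation_results: dict = field(default_factory=dict)  # e.g. {"auc": 0.81}
    risk_assessment_done: bool = False

class ModelInventory:
    """Centralised inventory: one place to see every model's governance status."""

    def __init__(self):
        self._models = {}

    def register(self, record: ModelRecord):
        self._models[record.name] = record

    def governance_check(self, name: str) -> list:
        """Automated checkpoint: return the list of unmet requirements."""
        r = self._models[name]
        gaps = []
        if not r.data_sources:
            gaps.append("missing data-source lineage")
        if not r.validation_results:
            gaps.append("missing validation results")
        if not r.risk_assessment_done:
            gaps.append("risk assessment not completed")
        return gaps

    def approve_for_production(self, name: str) -> bool:
        """Deployment is blocked unless every checkpoint passes."""
        return not self.governance_check(name)
```

The point of the design is that the check runs the same way for every model in every department, so gaps surface systematically rather than as audit-time surprises.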
Rather than treating governance as a necessary evil, enterprises should realise it enables AI innovation at scale.

Immediate action items for AI leaders

Organisations looking to escape the time-to-market quagmire should prioritise the following:
- Audit current state: Conduct an assessment of existing AI initiatives, identifying fragmented processes and manual bottlenecks
- Standardise workflows: Implement consistent processes for AI use case intake, development, and deployment across all business units
- Invest in integration: Deploy platforms to unify disparate tools and systems under a single governance framework
- Establish enterprise oversight: Create centralised visibility into all AI initiatives with real-time monitoring and reporting capabilities

The competitive advantage of getting it right

Organisations that solve the execution challenge will be able to bring AI solutions to market faster, scale more efficiently, and maintain the trust of stakeholders and regulators. Enterprises that continue with fragmented processes and manual workflows will find themselves at a disadvantage compared to their more organised competitors. Operational excellence isn't just about efficiency; it's about survival.

The data shows enterprise AI investment will continue to grow. The question isn't whether organisations will invest in AI, but whether they'll develop the operational abilities necessary to realise a return on that investment. The opportunity to lead in the AI-driven economy has never been greater for those willing to embrace governance as an enabler, not an obstacle.
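The "standardise workflows" item amounts to making every use case follow the same intake → development → deployment path. A minimal sketch of that idea, under the assumption of just three stages with no skipping (real intake processes would carry far more states and metadata; the names here are hypothetical):

```python
from enum import Enum

class Stage(Enum):
    INTAKE = 1
    DEVELOPMENT = 2
    DEPLOYMENT = 3

# Allowed transitions: every use case moves through the same pipeline in order.
_NEXT = {Stage.INTAKE: Stage.DEVELOPMENT, Stage.DEVELOPMENT: Stage.DEPLOYMENT}

class UseCase:
    """Hypothetical standardised record for an AI use case."""

    def __init__(self, title: str):
        self.title = title
        self.stage = Stage.INTAKE  # every use case enters the same way

    def advance(self) -> "UseCase":
        """Move to the next stage; refuse to skip or go past deployment."""
        if self.stage not in _NEXT:
            raise ValueError(f"{self.title} is already deployed")
        self.stage = _NEXT[self.stage]
        return self
```

Because the transition table is shared, any dashboard can report how many use cases sit at each stage – exactly the enterprise-wide visibility the fourth action item calls for.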
    WWW.ARTIFICIALINTELLIGENCE-NEWS.COM
  • Q&A: How anacondas, chickens, and locals may be able to coexist in the Amazon

    A coiled giant anaconda. They are the largest snake species in Brazil and play a major role in legends including the ‘Boiuna’ and the ‘Cobra Grande.’ CREDIT: Beatriz Cosendey.


South America’s lush Amazon region is a biodiversity hotspot, which means that every living thing must find a way to co-exist, even some of the most feared snakes on the planet: anacondas. In a paper published June 16 in the journal Frontiers in Amphibian and Reptile Science, conservation biologists Beatriz Cosendey and Juarez Carlos Brito Pezzuti from the Federal University of Pará’s Center for Amazonian Studies in Brazil analyze the key points behind the interactions between humans and local anaconda populations.
Ahead of the paper’s publication, the team at Frontiers conducted this wide-ranging Q&A with Cosendey. It has not been altered.
    Frontiers: What inspired you to become a researcher?
    Beatriz Cosendey: As a child, I was fascinated by reports and documentaries about field research and often wondered what it took to be there and what kind of knowledge was being produced. Later, as an ecologist, I felt the need for approaches that better connected scientific research with real-world contexts. I became especially interested in perspectives that viewed humans not as separate from nature, but as part of ecological systems. This led me to explore integrative methods that incorporate local and traditional knowledge, aiming to make research more relevant and accessible to the communities involved.
    F: Can you tell us about the research you’re currently working on?
    BC: My research focuses on ethnobiology, an interdisciplinary field intersecting ecology, conservation, and traditional knowledge. We investigate not only the biodiversity of an area but also the relationship local communities have with surrounding species, providing a better understanding of local dynamics and areas needing special attention for conservation. After all, no one knows a place better than those who have lived there for generations. This deep familiarity allows for early detection of changes or environmental shifts. Additionally, developing a collaborative project with residents generates greater engagement, as they recognize themselves as active contributors; and collective participation is essential for effective conservation.
A local boating on the Amazon River. CREDIT: Beatriz Cosendey.
    F: Could you tell us about one of the legends surrounding anacondas?
    BC: One of the greatest myths is about the Great Snake—a huge snake that is said to inhabit the Amazon River and sleep beneath the town. According to the dwellers, the Great Snake is an anaconda that has grown too large; its movements can shake the river’s waters, and its eyes look like fire in the darkness of night. People say anacondas can grow so big that they can swallow large animals—including humans or cattle—without difficulty.
F: What could be the reasons why the traditional role of anacondas as a spiritual and mythological entity has changed? Do you think the fact that fewer anacondas have been seen in recent years contributes to their diminished importance as a mythological entity?
BC: Not exactly. I believe the two are related, but not in a direct way. The mythology still exists, but among Aritapera dwellers, there’s a more practical, everyday concern—mainly the fear of losing their chickens. As a result, anacondas have come to be seen as stealthy thieves. These traits are mostly associated with smaller individuals (up to around 2–2.5 meters), while the larger ones—which may still carry the symbolic weight of the ‘Great Snake’—tend to retreat to more sheltered areas; because of the presence of houses, motorized boats, and general noise, they are now seen much less frequently.
    A giant anaconda is being measured. Credit: Pedro Calazans.
    F: Can you share some of the quotes you’ve collected in interviews that show the attitude of community members towards anacondas? How do chickens come into play?
BC: When talking about anacondas, one thing always comes up: chickens. “Chicken is her [the anaconda’s] favorite dish. If one clucks, she comes,” said one dweller. This kind of remark helps explain why the conflict is often framed in economic terms. During the interviews and conversations with local dwellers, many emphasized the financial impact of losing their animals: “The biggest loss is that they keep taking chicks and chickens…” or “You raise the chicken—you can’t just let it be eaten for free, right?”
    For them, it’s a loss of investment, especially since corn, which is used as chicken feed, is expensive. As one person put it: “We spend time feeding and raising the birds, and then the snake comes and takes them.” One dweller shared that, in an attempt to prevent another loss, he killed the anaconda and removed the last chicken it had swallowed from its belly—”it was still fresh,” he said—and used it for his meal, cooking the chicken for lunch so it wouldn’t go to waste.
    One of the Amazonas communities where the researchers conducted their research. CREDIT: Beatriz Cosendey.
    Some interviewees reported that they had to rebuild their chicken coops and pigsties because too many anacondas were getting in. Participants would point out where the anaconda had entered and explained that they came in through gaps or cracks but couldn’t get out afterwards because they ‘tufavam’ — a local term referring to the snake’s body swelling after ingesting prey.
    We saw chicken coops made with mesh, with nylon, some that worked and some that didn’t. Guided by the locals’ insights, we concluded that the best solution to compensate for the gaps between the wooden slats is to line the coop with a fine nylon mesh, and on the outside, a layer of wire mesh, which protects the inner mesh and prevents the entry of larger animals.
    F: Are there any common misconceptions about this area of research? How would you address them?
    BC: Yes, very much. Although ethnobiology is an old science, it’s still underexplored and often misunderstood. In some fields, there are ongoing debates about the robustness and scientific validity of the field and related areas. This is largely because the findings don’t always rely only on hard statistical data.
    However, like any other scientific field, it follows standardized methodologies, and no result is accepted without proper grounding. What happens is that ethnobiology leans more toward the human sciences, placing human beings and traditional knowledge as key variables within its framework.
    To address these misconceptions, I believe it’s important to emphasize that ethnobiology produces solid and relevant knowledge—especially in the context of conservation and sustainable development. It offers insights that purely biological approaches might overlook and helps build bridges between science and society.
    The study focused on the várzea regions of the Lower Amazon River. CREDIT: Beatriz Cosendey.
    F: What are some of the areas of research you’d like to see tackled in the years ahead?
    BC: I’d like to see more conservation projects that include local communities as active participants rather than as passive observers. Incorporating their voices, perspectives, and needs not only makes initiatives more effective, but also more just. There is also great potential in recognizing and valuing traditional knowledge. Beyond its cultural significance, certain practices—such as the use of natural compounds—could become practical assets for other vulnerable regions. Once properly documented and understood, many of these approaches offer adaptable forms of environmental management and could help inform broader conservation strategies elsewhere.
    F: How has open science benefited the reach and impact of your research?
BC: Open science is crucial for making research more accessible. By eliminating access barriers, it facilitates a broader exchange of knowledge—especially important for interdisciplinary research like mine, which draws on multiple knowledge systems and gains value when shared widely. For scientific work, it ensures that knowledge reaches a wider audience, including practitioners and policymakers. This openness fosters dialogue across different sectors, making research more inclusive and encouraging greater collaboration among diverse groups.
    The Q&A can also be read here.
    WWW.POPSCI.COM
    Q&A: How anacondas, chickens, and locals may be able to coexist in the Amazon
  • The Word is Out: Danish Ministry Drops Microsoft, Goes Open Source

    Key Takeaways

    Denmark’s Ministry of Digitalization will leave the Microsoft ecosystem in favor of Linux and other open-source software, moving half of its staff to Linux and LibreOffice by summer and the rest by fall.
    The move follows similar decisions by Denmark’s largest cities, Copenhagen and Aarhus, and is driven by three main factors: costs, politics, and security.
    Other EU countries, including Germany and the Netherlands, are taking similar steps toward digital independence.

    Denmark’s Ministry of Digitalization has recently announced that it will leave the Microsoft ecosystem in favor of Linux and other open-source software.
    Minister Caroline Stage Olsen revealed this in an interview with Politiken, the country’s leading newspaper. According to Olsen, the Ministry plans to switch half of its employees to Linux and LibreOffice by summer, and the rest by fall.
    The announcement comes after Denmark’s largest cities – Copenhagen and Aarhus – made similar moves earlier this month.
    Why the Danish Ministry of Digitalization Switched to Open-Source Software
    The three main reasons Denmark is moving away from Microsoft are costs, politics, and security.
    In the case of Aarhus, the city was able to slash its annual costs from 800K kroner to just 225K by replacing Microsoft with a German service provider. 
    Costs are also a pain point for Copenhagen, which saw its Microsoft costs balloon from 313M kroner in 2018 to 538M kroner in 2023.
    It’s also part of a broader move to increase its digital sovereignty. In her LinkedIn post, Olsen further explained that the strategy is not about isolation or digital nationalism, adding that they should not turn their backs completely on global tech companies like Microsoft. 

    Instead, it’s about avoiding being too dependent on these companies, which could prevent them from acting freely.
    Then there’s politics. Since his reelection earlier this year, US President Donald Trump has repeatedly threatened to take over Greenland, an autonomous territory of Denmark. 
    In May, the Danish Foreign Minister Lars Løkke Rasmussen summoned the US ambassador regarding news that US spy agencies have been told to focus on the territory.
    If the relationship between the two countries continues to erode, Trump could order Microsoft and other US tech companies to cut off Denmark from their services. After all, Microsoft and Facebook’s parent company, Meta, have close ties to the US president after contributing $1M each to his inauguration in January.
    Denmark Isn’t Alone: Other EU Countries Are Making Similar Moves
    Denmark is only one of a growing number of European Union (EU) countries taking measures to become more digitally independent.
    Germany’s Federal Digital Minister Karsten Wildberger emphasized the need to be more independent of global tech companies during the re:publica internet conference in May. He added that IT companies in the EU have the opportunity to create tech that is based on the region’s values.

    Meanwhile, Bert Hubert, a technical advisor to the Dutch Electoral Council, wrote in February that ‘it is no longer safe to move our governments and societies to US clouds.’ He said that America is no longer a ‘reliable partner,’ making it risky to have the data of European governments and businesses at the mercy of US-based cloud providers.
    Earlier this month, the chief prosecutor of the International Criminal Court (ICC), Karim Khan, was abruptly cut off from his Microsoft-based email account, sparking uproar across the region. 
    Speculation quickly arose that the incident was linked to sanctions previously imposed on the ICC by the Trump administration, an assertion Microsoft has denied.
    Weaning the EU Away from US Tech is Possible, But Challenges Lie Ahead
    Change like this doesn’t happen overnight. Just finding, let alone developing, reliable alternatives to tools that have been part of daily workflows for decades is a massive undertaking.
    It will also take time for users to adapt to these new tools, especially when transitioning to an entirely new ecosystem. In Aarhus, for example, municipal staff initially viewed the shift to open source as a step down from the familiarity and functionality of Microsoft products.
    Overall, these are only temporary hurdles. Momentum is building, with growing calls for digital independence from leaders like Ministers Olsen and Wildberger.
    Initiatives such as the Digital Europe Programme, which seeks to reduce reliance on foreign systems and solutions, further accelerate this push. As a result, the EU’s transition could arrive sooner rather than later.

    As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist Cedric Solidon continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy.
    With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility.
    Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines.
    Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech. 
    He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom.
    That fascination with tech didn’t just stick. It evolved into a full-blown calling.
    After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career.
    He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy.
    His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers.
    At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap.
    Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual.
    As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting.
    From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.


    Our editorial process

    The Tech Report editorial policy is centered on providing helpful, accurate content that offers real value to our readers. We only work with experienced writers who have specific knowledge in the topics they cover, including the latest developments in technology, online privacy, cryptocurrencies, software, and more. Our editorial policy ensures that each topic is researched and curated by our in-house editors. We maintain rigorous journalistic standards, and every article is 100% written by real authors.
    TECHREPORT.COM
    The Word is Out: Danish Ministry Drops Microsoft, Goes Open Source
    Key Takeaways Meta and Yandex have been found guilty of secretly listening to localhost ports and using them to transfer sensitive data from Android devices. The corporations use Meta Pixel and Yandex Metrica scripts to transfer cookies from browsers to local apps. Using incognito mode or a VPN can’t fully protect users against it. A Meta spokesperson has called this a ‘miscommunication,’ which seems to be an attempt to underplay the situation. Denmark’s Ministry of Digitalization has recently announced that it will leave the Microsoft ecosystem in favor of Linux and other open-source software. Minister Caroline Stage Olsen revealed this in an interview with Politiken, the country’s leading newspaper. According to Olsen, the Ministry plans to switch half of its employees to Linux and LibreOffice by summer, and the rest by fall. The announcement comes after Denmark’s largest cities – Copenhagen and Aarhus – made similar moves earlier this month. Why the Danish Ministry of Digitalization Switched to Open-Source Software The three main reasons Denmark is moving away from Microsoft are costs, politics, and security. In the case of Aarhus, the city was able to slash its annual costs from 800K kroner to just 225K by replacing Microsoft with a German service provider.  The same is a pain point for Copenhagen, which saw its costs on Microsoft balloon from 313M kroner in 2018 to 538M kroner in 2023. It’s also part of a broader move to increase its digital sovereignty. In her LinkedIn post, Olsen further explained that the strategy is not about isolation or digital nationalism, adding that they should not turn their backs completely on global tech companies like Microsoft.  Instead, it’s about avoiding being too dependent on these companies, which could prevent them from acting freely. Then there’s politics. Since his reelection earlier this year, US President Donald Trump has repeatedly threatened to take over Greenland, an autonomous territory of Denmark.  
In May, the Danish Foreign Minister Lars Løkke Rasmussen summoned the US ambassador regarding news that US spy agencies have been told to focus on the territory. If the relationship between the two countries continues to erode, Trump can order Microsoft and other US tech companies to cut off Denmark from their services. After all, Microsoft and Facebook’s parent company Meta, have close ties to the US president after contributing $1M each for his inauguration in January. Denmark Isn’t Alone: Other EU Countries Are Making Similar Moves Denmark is only one of the growing number of European Union (EU) countries taking measures to become more digitally independent. Germany’s Federal Digital Minister Karsten Wildberger emphasized the need to be more independent of global tech companies during the re:publica internet conference in May. He added that IT companies in the EU have the opportunity to create tech that is based on the region’s values. Meanwhile, Bert Hubert, a technical advisor to the Dutch Electoral Council, wrote in February that ‘it is no longer safe to move our governments and societies to US clouds.’ He said that America is no longer a ‘reliable partner,’ making it risky to have the data of European governments and businesses at the mercy of US-based cloud providers. Earlier this month, the chief prosecutor of the International Criminal Court (ICC), Karim Khan, experienced a disconnection from his Microsoft-based email account, sparking uproar across the region.  Speculation quickly arose that the incident was linked to sanctions previously imposed on the ICC by the Trump administration, an assertion Microsoft has denied. Earlier this month, the chief prosecutor of the International Criminal Court (ICC), Karim Khan, disconnection from his Microsoft-based email account caused an uproar in the region. Some speculated that this was connected to sanctions imposed by Trump against the ICC, which Microsoft denied. 
Weaning the EU Away from US Tech is Possible, But Challenges Lie Ahead Change like this doesn’t happen overnight. Just finding, let alone developing, reliable alternatives to tools that have been part of daily workflows for decades, is a massive undertaking. It will also take time for users to adapt to these new tools, especially when transitioning to an entirely new ecosystem. In Aarhus, for example, municipal staff initially viewed the shift to open source as a step down from the familiarity and functionality of Microsoft products. Overall, these are only temporary hurdles. Momentum is building, with growing calls for digital independence from leaders like Ministers Olsen and Wildberger.  Initiatives such as the Digital Europe Programme, which seeks to reduce reliance on foreign systems and solutions, further accelerate this push. As a result, the EU’s transition could arrive sooner rather than later As technology continues to evolve—from the return of 'dumbphones' to faster and sleeker computers—seasoned tech journalist, Cedric Solidon, continues to dedicate himself to writing stories that inform, empower, and connect with readers across all levels of digital literacy. With 20 years of professional writing experience, this University of the Philippines Journalism graduate has carved out a niche as a trusted voice in tech media. Whether he's breaking down the latest advancements in cybersecurity or explaining how silicon-carbon batteries can extend your phone’s battery life, his writing remains rooted in clarity, curiosity, and utility. Long before he was writing for Techreport, HP, Citrix, SAP, Globe Telecom, CyberGhost VPN, and ExpressVPN, Cedric's love for technology began at home courtesy of a Nintendo Family Computer and a stack of tech magazines. Growing up, his days were often filled with sessions of Contra, Bomberman, Red Alert 2, and the criminally underrated Crusader: No Regret. But gaming wasn't his only gateway to tech.  
He devoured every T3, PCMag, and PC Gamer issue he could get his hands on, often reading them cover to cover. It wasn’t long before he explored the early web in IRC chatrooms, online forums, and fledgling tech blogs, soaking in every byte of knowledge from the late '90s and early 2000s internet boom. That fascination with tech didn’t just stick. It evolved into a full-blown calling. After graduating with a degree in Journalism, he began his writing career at the dawn of Web 2.0. What started with small editorial roles and freelance gigs soon grew into a full-fledged career. He has since collaborated with global tech leaders, lending his voice to content that bridges technical expertise with everyday usability. He’s also written annual reports for Globe Telecom and consumer-friendly guides for VPN companies like CyberGhost and ExpressVPN, empowering readers to understand the importance of digital privacy. His versatility spans not just tech journalism but also technical writing. He once worked with a local tech company developing web and mobile apps for logistics firms, crafting documentation and communication materials that brought together user-friendliness with deep technical understanding. That experience sharpened his ability to break down dense, often jargon-heavy material into content that speaks clearly to both developers and decision-makers. At the heart of his work lies a simple belief: technology should feel empowering, not intimidating. Even if the likes of smartphones and AI are now commonplace, he understands that there's still a knowledge gap, especially when it comes to hardware or the real-world benefits of new tools. His writing hopes to help close that gap. Cedric’s writing style reflects that mission. It’s friendly without being fluffy and informative without being overwhelming. 
Whether writing for seasoned IT professionals or casual readers curious about the latest gadgets, he focuses on how a piece of technology can improve our lives, boost our productivity, or make our work more efficient. That human-first approach makes his content feel more like a conversation than a technical manual. As his writing career progresses, his passion for tech journalism remains as strong as ever. With the growing need for accessible, responsible tech communication, he sees his role not just as a journalist but as a guide who helps readers navigate a digital world that’s often as confusing as it is exciting. From reviewing the latest devices to unpacking global tech trends, Cedric isn’t just reporting on the future; he’s helping to write it.
  • Fusion and AI: How private sector tech is powering progress at ITER

    In April 2025, at the ITER Private Sector Fusion Workshop in Cadarache, something remarkable unfolded. In a room filled with scientists, engineers and software visionaries, the line between big science and commercial innovation began to blur.  
    Three organisations – Microsoft Research, Arena and Brigantium Engineering – shared how artificial intelligence, already transforming everything from language models to logistics, is now stepping into a new role: helping humanity to unlock the power of nuclear fusion. 
    Each presenter addressed a different part of the puzzle, but the message was the same: AI isn’t just a buzzword anymore. It’s becoming a real tool – practical, powerful and indispensable – for big science and engineering projects, including fusion. 
    “If we think of the agricultural revolution and the industrial revolution, the AI revolution is next – and it’s coming at a pace which is unprecedented,” said Kenji Takeda, director of research incubations at Microsoft Research. 
    Microsoft’s collaboration with ITER is already in motion. Just a month before the workshop, the two teams signed a Memorandum of Understanding (MoU) to explore how AI can accelerate research and development. This follows ITER’s initial use of Microsoft technology to empower its teams.
    A chatbot built on the Azure OpenAI service was developed to help staff navigate technical knowledge spanning more than a million ITER documents using natural conversation. GitHub Copilot assists with coding, while AI helps to resolve IT support tickets – those everyday but essential tasks that keep the lights on. 
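A document-navigation chatbot like the one described typically follows the retrieval-then-answer pattern: first find the passages most relevant to a question, then have the language model answer from them. The internals of ITER's Azure-based system aren't public here, so below is only a toy, stdlib-only sketch of the retrieval step, with hypothetical documents and a crude term-overlap score standing in for real embeddings:

```python
from collections import Counter

def tokenize(text):
    # Lowercase word tokens; a production system would use embeddings instead.
    return [w.strip(".,?'\"()") for w in text.lower().split()]

def score(query, doc):
    # Crude relevance: count of query terms that also appear in the document.
    q, d = Counter(tokenize(query)), Counter(tokenize(doc))
    return sum(min(q[t], d[t]) for t in q)

def retrieve(query, docs, k=2):
    # Return the k best-matching documents, i.e. the context an LLM
    # would then be asked to answer from.
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

# Hypothetical snippets standing in for the million-document corpus.
docs = [
    "Tokamak first-wall materials must survive heat and radiation.",
    "GitHub Copilot assists engineers with coding tasks.",
    "IT support tickets are resolved with AI assistance.",
]
top = retrieve("Which materials survive radiation in the tokamak?", docs, k=1)
```

The point of the sketch is the shape of the pipeline, not the scoring: swap the overlap score for vector similarity and the answer step for a chat-completion call, and you have the pattern such systems use.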
    But Microsoft’s vision goes deeper. Fusion demands materials that can survive extreme conditions – heat, radiation, pressure – and that’s where AI shows a different kind of potential. MatterGen, a Microsoft Research generative AI model for materials, designs entirely new materials based on specific properties.
    “It’s like ChatGPT,” said Takeda, “but instead of ‘Write me a poem’, we ask it to design a material that can survive as the first wall of a fusion reactor.” 
    The next step? MatterSim – a simulation tool that predicts how these imagined materials will behave in the real world. By combining generation and simulation, Microsoft hopes to uncover materials that don’t yet exist in any catalogue. 
    While Microsoft tackles the atomic scale, Arena is focused on a different challenge: speeding up hardware development. As general manager Michael Frei put it: “Software innovation happens in seconds. In hardware, that loop can take months – or years.” 
    Arena’s answer is Atlas, a multimodal AI platform that acts as an extra set of hands – and eyes – for engineers. It can read data sheets, interpret lab results, analyse circuit diagrams and even interact with lab equipment through software interfaces. “Instead of adjusting an oscilloscope manually,” said Frei, “you can just say, ‘Verify the I2C [inter-integrated circuit] protocol’, and Atlas gets it done.” 
    It doesn’t stop there. Atlas can write and adapt firmware on the fly, responding to real-time conditions. That means tighter feedback loops, faster prototyping and fewer late nights in the lab. Arena aims to make building hardware feel a little more like writing software – fluid, fast and assisted by smart tools. 

    Fusion, of course, isn’t just about atoms and code – it’s also about construction. Gigantic, one-of-a-kind machines don’t build themselves. That’s where Brigantium Engineering comes in.
    Founder Lynton Sutton explained how his team uses “4D planning” – a marriage of 3D CAD models and detailed construction schedules – to visualise how everything comes together over time. “Gantt charts are hard to interpret. 3D models are static. Our job is to bring those together,” he said. 
    The result is a time-lapse-style animation that shows the construction process step by step. It’s proven invaluable for safety reviews and stakeholder meetings. Rather than poring over spreadsheets, teams can simply watch the plan come to life. 
    And there’s more. Brigantium is bringing these models into virtual reality using Unreal Engine – the same one behind many video games. One recent model recreated ITER’s tokamak pit using drone footage and photogrammetry. The experience is fully interactive and can even run in a web browser.
    “We’ve really improved the quality of the visualisation,” said Sutton. “It’s a lot smoother; the textures look a lot better. Eventually, we’ll have this running through a web browser, so anybody on the team can just click on a web link to navigate this 4D model.” 
    Looking forward, Sutton believes AI could help automate the painstaking work of syncing schedules with 3D models. One day, these simulations could reach all the way down to individual bolts and fasteners – not just with impressive visuals, but with critical tools for preventing delays. 
    Despite the different approaches, one theme ran through all three presentations: AI isn’t just a tool for office productivity. It’s becoming a partner in creativity, problem-solving and even scientific discovery. 
    Takeda mentioned that Microsoft is experimenting with “world models” inspired by how video games simulate physics. These models learn about the physical world by watching pixels in the form of videos of real phenomena such as plasma behaviour. “Our thesis is that if you showed this AI videos of plasma, it might learn the physics of plasmas,” he said. 
    It sounds futuristic, but the logic holds. The more AI can learn from the world, the more it can help us understand it – and perhaps even master it. At its heart, the message from the workshop was simple: AI isn’t here to replace the scientist, the engineer or the planner; it’s here to help, and to make their work faster, more flexible and maybe a little more fun.
    As Takeda put it: “Those are just a few examples of how AI is starting to be used at ITER. And it’s just the start of that journey.” 
    If these early steps are any indication, that journey won’t just be faster – it might also be more inspired. 
    WWW.COMPUTERWEEKLY.COM
  • The Role of the 3-2-1 Backup Rule in Cybersecurity

    Daniel Pearson, CEO, KnownHost | June 12, 2025 | 3 Min Read

    Cyber incidents are expected to cost the US $639 billion in 2025. According to the latest estimates, this figure will continue to rise, reaching approximately $1.82 trillion in cybercrime costs by 2028. These numbers highlight the crucial importance of strong cybersecurity strategies, which businesses must build to reduce the likelihood of such risks. As technology evolves at a dramatic pace, businesses are increasingly dependent on digital infrastructure, exposing themselves to threats such as ransomware, accidental data loss, and corruption. Although the 3-2-1 backup rule dates back to 2009, the strategy has stayed relevant over the years, minimizing data loss under threat, and it will remain a crucial method for preventing major data loss in the years ahead.

    What Is the 3-2-1 Backup Rule?

    The 3-2-1 backup rule is a popular backup strategy that ensures resilience against data loss. The "3" means keeping three copies of your data: the original plus two backups. The "2" means storing those copies on two different types of storage, such as the cloud or a local drive. The "1" means keeping one copy of your data off site, which completes the setup. This setup has long been considered a gold standard in IT security, as it minimizes single points of failure and increases the chance of successful data recovery in the event of a cyber attack.

    Why Is This Rule Relevant in the Modern Cyber Threat Landscape?

    Statistics show that in 2024, 80% of companies saw an increase in the frequency of cloud attacks. Although many businesses assume that storing data in the cloud is enough, it is certainly not failsafe, and businesses are in greater danger than ever as attackers manipulate rapidly advancing technology and AI capabilities.
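The three conditions of the rule are simple enough to check mechanically: at least three copies in total (the original counts as one), at least two distinct storage media, and at least one copy off site. A minimal sketch of such a check, with hypothetical media and off-site labels:

```python
def satisfies_321(copies):
    """copies: list of dicts like {"media": "cloud", "offsite": True}.
    Returns True only if the plan meets all three 3-2-1 conditions."""
    total = len(copies)                            # "3": original + two backups
    media_types = {c["media"] for c in copies}     # "2": distinct storage media
    has_offsite = any(c["offsite"] for c in copies)  # "1": an off-site copy
    return total >= 3 and len(media_types) >= 2 and has_offsite

# Example plan: original on a local disk, one backup on a NAS,
# one off-site backup in the cloud.
plan = [
    {"media": "local_disk", "offsite": False},
    {"media": "nas",        "offsite": False},
    {"media": "cloud",      "offsite": True},
]
```

Dropping the cloud copy from `plan` fails the off-site and copy-count conditions, which is exactly the kind of gap a periodic audit script would surface.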
As cloud infrastructure has grown at a similar pace, cyber criminals are actively targeting these backups, leaving businesses with no clear recovery option. Therefore, more than ever, businesses need to invest in immutable backup solutions.

    Common Backup Mistakes Businesses Make

    A common misstep is keeping all backups on the same physical network. If malware gets in, it can quickly spread and encrypt both the primary data and the backups, wiping out everything in one go. Another issue is the lack of offline or air-gapped backups. Many businesses rely entirely on cloud-based or on-premises storage that's always connected, which means their recovery options could be compromised during an attack. Finally, one of the most overlooked yet crucial steps is testing backup restoration. A backup is only useful if it can actually be restored. Too often, companies skip regular testing. This can lead to a harsh reality check when they discover, too late, that their backup data is either corrupted or completely inaccessible after a breach.

    How to Implement the 3-2-1 Backup Rule?

    To successfully implement the 3-2-1 backup strategy as part of a robust cybersecurity framework, organizations should start by diversifying their storage methods. A resilient approach typically includes a mix of local storage, cloud-based solutions, and physical media such as external hard drives. From there, it's essential to incorporate technologies that support write-once, read-many (WORM) functionality, so that backups cannot be modified or deleted, even by administrators, providing an extra layer of protection against threats. To further enhance resilience, organizations should make use of automation and AI-driven tools, which can offer real-time monitoring, detect anomalies, and apply predictive analytics to maintain the integrity of backup data and flag any unusual activity or failures in the process.
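The write-once, read-many contract can be illustrated at the application level: once a backup object is stored under a key, any attempt to overwrite or delete it is refused. Real deployments would enforce this at the storage layer (for example, with object-lock retention features offered by cloud providers) rather than in application code, so treat this as a toy sketch of the contract only:

```python
class WormStore:
    """Toy write-once, read-many store: once written, a key can never
    be overwritten or deleted, even by the code that wrote it."""

    def __init__(self):
        self._data = {}

    def put(self, key, blob):
        # Refuse to overwrite an existing key: the "write once" guarantee.
        if key in self._data:
            raise PermissionError(f"{key!r} is immutable")
        self._data[key] = blob

    def get(self, key):
        # Reads are always allowed: the "read many" half of WORM.
        return self._data[key]

    def delete(self, key):
        # Deletes are refused unconditionally, even for administrators.
        raise PermissionError("WORM store: deletes are not allowed")

store = WormStore()
store.put("backup-2025-06-12", b"...archive bytes...")
```

A restore test then reduces to reading a backup back out and verifying its checksum against the one recorded at write time, which is the regular exercise the article argues too many companies skip.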
Lastly, it's crucial to ensure your backup strategy aligns with relevant regulatory requirements, such as GDPR in the UK or CCPA in the US. Compliance not only mitigates legal risk but also reinforces your commitment to data protection and operational continuity. By blending the time-tested 3-2-1 rule with modern advances like immutable storage and intelligent monitoring, organizations can build a highly resilient backup architecture that strengthens their overall cybersecurity posture.

    About the Author

    Daniel Pearson is the CEO of KnownHost, a managed web hosting service provider. Pearson also serves as a dedicated board member and supporter of the AlmaLinux OS Foundation, a non-profit organization focused on advancing AlmaLinux OS, an open-source operating system derived from RHEL. His passion for technology extends beyond his professional endeavors, as he actively promotes digital literacy and empowerment. Pearson's entrepreneurial drive and extensive industry knowledge have solidified his reputation as a respected figure in the tech community.
    WWW.INFORMATIONWEEK.COM
    The Role of the 3-2-1 Backup Rule in Cybersecurity
    Daniel Pearson , CEO, KnownHostJune 12, 20253 Min ReadBusiness success concept. Cubes with arrows and target on the top.Cyber incidents are expected to cost the US $639 billion in 2025. According to the latest estimates, this dynamic will continue to rise, reaching approximately 1.82 trillion US dollars in cybercrime costs by 2028. These figures highlight the crucial importance of strong cybersecurity strategies, which businesses must build to reduce the likelihood of risks. As technology evolves at a dramatic pace, businesses are increasingly dependent on utilizing digital infrastructure, exposing themselves to threats such as ransomware, accidental data loss, and corruption.  Despite the 3-2-1 backup rule being invented in 2009, this strategy has stayed relevant for businesses over the years, ensuring that the loss of data is minimized under threat, and will be a crucial method in the upcoming years to prevent major data loss.   What Is the 3-2-1 Backup Rule? The 3-2-1 backup rule is a popular backup strategy that ensures resilience against data loss. The setup consists of keeping your original data and two backups.  The data also needs to be stored in two different locations, such as the cloud or a local drive.  The one in the 3-2-1 backup rule represents storing a copy of your data off site, and this completes the setup.  This setup has been considered a gold standard in IT security, as it minimizes points of failure and increases the chance of successful data recovery in the event of a cyber-attack.  Related:Why Is This Rule Relevant in the Modern Cyber Threat Landscape? Statistics show that in 2024, 80% of companies have seen an increase in the frequency of cloud attacks.  Although many businesses assume that storing data in the cloud is enough, it is certainly not failsafe, and businesses are in bigger danger than ever due to the vast development of technology and AI capabilities attackers can manipulate and use.  
    As cloud infrastructure has grown at a similar pace, cybercriminals are actively targeting cloud backups, leaving businesses with no clear recovery option. Therefore, more than ever, businesses need to invest in immutable backup solutions.

    Common Backup Mistakes Businesses Make

    A common misstep is keeping all backups on the same physical network. If malware gets in, it can quickly spread and encrypt both the primary data and the backups, wiping out everything in one go.

    Another issue is the lack of offline or air-gapped backups. Many businesses rely entirely on cloud-based or on-premises storage that is always connected, which means their recovery options could be compromised during an attack.

    Finally, one of the most overlooked yet crucial steps is testing backup restoration. A backup is only useful if it can actually be restored. Too often, companies skip regular testing and face a harsh reality check when they discover, too late, that their backup data is either corrupted or completely inaccessible after a breach.

    How to Implement the 3-2-1 Backup Rule

    To successfully implement the 3-2-1 backup strategy as part of a robust cybersecurity framework, organizations should start by diversifying their storage methods. A resilient approach typically includes a mix of local storage, cloud-based solutions, and physical media such as external hard drives.

    From there, it is essential to incorporate technologies that support write-once, read-many (WORM) functionality, so that backups cannot be modified or deleted, even by administrators, providing an extra layer of protection against threats.

    To further enhance resilience, organizations should make use of automation and AI-driven tools. These technologies can offer real-time monitoring, detect anomalies, and apply predictive analytics to maintain the integrity of backup data and flag any unusual activity or failures in the process.
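    The restoration-testing step above can be sketched as a small verification routine. This is an assumption-laden illustration, not a tool the article describes: it assumes a test restore has been written to a directory and compares it file-by-file against the live data using SHA-256 checksums.

    ```python
    import hashlib
    from pathlib import Path

    def verify_restore(live: Path, restored: Path) -> list[str]:
        """Compare a test-restored backup against the live data set and
        return relative paths of files that are missing or corrupted."""
        problems = []
        for src in sorted(live.rglob("*")):
            if not src.is_file():
                continue
            rel = src.relative_to(live)
            copy = restored / rel
            if not copy.is_file():
                problems.append(f"missing: {rel}")
            elif (hashlib.sha256(copy.read_bytes()).digest()
                  != hashlib.sha256(src.read_bytes()).digest()):
                problems.append(f"corrupted: {rel}")
        return problems
    ```

    Running a check like this on a schedule, rather than only after an incident, is what turns a backup from an assumption into a tested recovery option.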
    Lastly, it's crucial to ensure your backup strategy aligns with relevant regulatory requirements, such as the GDPR in the UK or the CCPA in the US. Compliance not only mitigates legal risk but also reinforces your commitment to data protection and operational continuity.

    By blending the time-tested 3-2-1 rule with modern advances like immutable storage and intelligent monitoring, organizations can build a highly resilient backup architecture that strengthens their overall cybersecurity posture.

    About the Author

    Daniel Pearson, CEO, KnownHost

    Daniel Pearson is the CEO of KnownHost, a managed web hosting service provider. Pearson also serves as a dedicated board member and supporter of the AlmaLinux OS Foundation, a non-profit organization focused on advancing AlmaLinux OS, an open-source operating system derived from RHEL. His passion for technology extends beyond his professional endeavors, as he actively promotes digital literacy and empowerment. Pearson's entrepreneurial drive and extensive industry knowledge have solidified his reputation as a respected figure in the tech community.