
How Agentic AI Is Revolutionizing Security, And How To Keep It Safe
www.forbes.com
It's important to empower the future of automation with agentic AI, while safeguarding against emerging security risks. (Image credit: Getty)

One of the most promising developments in technology today is agentic AI: the evolution of AI tools that can perform complex, multi-step tasks autonomously and make contextual decisions with minimal human intervention. Unlike the standard generative AI models that have been the primary focus since ChatGPT came onto the scene, agentic AI is designed to operate independently, executing high-level commands and learning from its experiences.

This capability holds immense potential across industries, from automating software development to revolutionizing cybersecurity operations. However, as AI systems begin to take on more autonomy, the security challenges they present must be addressed proactively.

A Game Changer for Business

AI agents are no longer limited to simple, reactive tasks like text generation or code completion. They now possess the ability to execute complex workflows, adapt to new situations and make decisions on the fly. Itamar Golan, CEO and co-founder of Prompt Security, noted, "Agentic AI differs from traditional GenAI tools in their ability to independently perform multi-step tasks and make contextual decisions." This ability to autonomously complete tasks is not just a time-saver; it can fundamentally transform how organizations approach operations, particularly in IT and security.

A prime example of agentic AI in action comes from Amazon Web Services, where AI agents were used to automate the transition of Java applications from older versions to Java 17. Chris Betz, CISO of AWS, explained, "It's not just a recompile. You actually have to go through and rewrite the code to make it Java 17 compatible." This process, which would traditionally require weeks of effort from developers for each application, was completed in a fraction of the time by leveraging agentic AI.
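The workflow Betz describes, in which an agent proposes a rewrite, verifies it, and retries on failure, can be sketched as a simple loop. This is an illustrative outline only, not AWS's actual tooling; `propose_fix` and `run_checks` are hypothetical stand-ins for a model call and a compile-and-test step:

```python
# Hypothetical sketch of an agentic code-migration loop: propose a change,
# verify it, and retry until the checks pass or attempts run out.
# propose_fix and run_checks are illustrative placeholders, not a real API.

def migrate(source: str, propose_fix, run_checks, max_attempts: int = 3):
    """Iteratively apply proposed rewrites until the checks pass."""
    candidate, feedback = source, ""
    for attempt in range(1, max_attempts + 1):
        candidate = propose_fix(candidate)       # e.g. an LLM rewrite
        ok, feedback = run_checks(candidate)     # e.g. compile + run tests
        if ok:
            return candidate, attempt
    raise RuntimeError(
        f"no passing rewrite after {max_attempts} attempts; last feedback: {feedback}"
    )
```

The key design point is that the agent's output is never accepted on faith: every proposed change must pass an external, deterministic check before the loop ends.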
These tools allow developers to focus on more innovative tasks, while AI handles the heavy lifting of routine updates and transitions. Betz estimated that AWS saved about 4,500 years of developer work by building this tool.

That said, the rise of agentic AI also introduces new risks, particularly around security and control. As Patrick Xu, co-founder and CTO at Aurascape AI, notes, "With the advent of agentic AI, these technologies naturally become attractive targets for malicious actors. We can expect attackers to continuously innovate and devise novel ways to exploit AI-driven systems." This new attack surface requires robust safeguards to ensure that AI agents operate securely within their designated tasks.

Key Security Considerations for Agentic AI

While agentic AI promises significant operational efficiency, it also brings security risks that cannot be ignored. These risks stem from the AI's ability to execute actions without human oversight, its broad system access and its real-time decision-making loops. To mitigate these challenges, organizations must implement a comprehensive security framework.

1. Authentication and Authorization

As agentic AI agents gain more responsibilities, ensuring strict control over what they can access is crucial. This means implementing proper authentication and authorization protocols to prevent unauthorized access to critical systems. According to Ariful Huq, co-founder at Exaforce, "A critical enabler for secure, agentic AI is robust identity and permission management that establishes clear provenance for every action an AI agent takes on a user's behalf." Ensuring that agents can only access the resources they need is key to minimizing potential security risks.

2. Output Validation

One of the most critical components of AI security is output validation. Just as user input is considered untrusted until validated, AI-generated output must undergo rigorous scrutiny before being acted upon.
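The validate-before-acting principle can be sketched in a few lines: a proposed action from an agent is checked against an explicit allow-list and simple argument rules before anything executes. The action names and schema here are assumptions chosen for illustration, not a standard interface:

```python
# Minimal sketch of treating agent output as untrusted input. A proposed
# action must match an explicit allow-list, with exactly the expected
# arguments, before it may be executed. Action names are hypothetical.

ALLOWED_ACTIONS = {
    "restart_service": {"service"},
    "block_ip":        {"address"},
}

def validate_action(action: dict) -> list:
    """Return a list of validation errors; an empty list means the action may proceed."""
    errors = []
    name = action.get("name")
    if name not in ALLOWED_ACTIONS:
        errors.append(f"action {name!r} is not on the allow-list")
        return errors
    expected = ALLOWED_ACTIONS[name]
    provided = set(action.get("args", {}))
    if provided != expected:
        errors.append(f"expected args {sorted(expected)}, got {sorted(provided)}")
    return errors
```

Anything that fails validation is rejected outright rather than "fixed up," mirroring how untrusted user input is normally handled.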
AI systems, like any software, are prone to errors, and their autonomous nature means these errors can have widespread impacts if left unchecked. Proper validation ensures that AI outputs are reliable and aligned with organizational standards.

3. Sandboxing

AI agents should never be allowed to execute code or perform tasks in a live environment without first being tested in a controlled, isolated sandbox. Sandboxing allows organizations to catch any errors or unexpected behaviors before they affect production systems. By implementing this practice, organizations can ensure that AI-generated actions are safe and do not pose a threat to the larger system.

4. Transparent Logging

Transparency is essential for maintaining control over AI actions. Detailed logging of every step an AI agent takes allows security teams to understand how decisions are made and track any potential issues. This is particularly important for accountability and troubleshooting. "When you have an AI agent, you want to know what it did and how it got there," says Chris Betz. Detailed logs provide the insight needed to diagnose problems and improve security practices over time.

5. Continuous Testing and Monitoring

Given the evolving nature of AI, continuous security testing is essential. Organizations should implement red-teaming and penetration testing to assess vulnerabilities within their AI systems and ensure they are resistant to new threats. As Ori Bendet, VP of product management at Checkmarx, highlights, "With agentic AI, automated security is easy; securing the automation process is harder." Ongoing testing and monitoring help ensure that AI systems remain secure as they evolve.

Risks and Biases in Agentic AI

As with all AI technologies, agentic AI raises important ethical concerns. One of the most pressing issues is the potential for AI to inherit biases from its training data. AI agents, when trained on biased or incomplete data, can make flawed or discriminatory decisions.
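A first-pass audit of that kind of bias can be as simple as comparing an agent's decision rates across groups in its logged output. The sketch below is purely illustrative: real fairness audits use proper statistical tests, and the 0.2 gap threshold is an arbitrary assumption:

```python
# Hypothetical first-pass bias audit: compare an agent's approval rates
# across groups from logged decisions, and flag a large gap for human
# review. Real audits would use proper statistical significance tests.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += bool(ok)
    return {g: approved[g] / totals[g] for g in totals}

def flag_bias(decisions, max_gap=0.2):
    """True if the gap between the best- and worst-treated group exceeds max_gap."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values()) > max_gap
```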
In cybersecurity, for example, AI systems used to monitor network traffic or respond to incidents could introduce new risks if they misinterpret their tasks or make biased decisions. Nicole Carignan, SVP at Darktrace, warns, "Without proper oversight, agentic AI may misinterpret their tasks, leading to unintended behaviors that could introduce new security risks." Organizations must remain vigilant in ensuring that AI agents are trained on high-quality, unbiased data and are regularly audited for fairness and accuracy.

The autonomous nature of agentic AI means that these systems can be manipulated, much like human employees. Just as attackers use social engineering to trick people, AI agents can be tricked into executing malicious actions. Guy Feinberg, growth product manager at Oasis Security, points out, "The real risk isn't AI itself, but the fact that organizations don't manage these non-human identities (NHIs) with the same security controls as human users." Organizations must treat AI agents like human identities, assigning them appropriate permissions, monitoring their activity and implementing clear policies to prevent abuse.

Empowering Innovation Without Losing Control

Despite their growing autonomy, agentic AI systems should be seen as tools that augment human capabilities, not as replacements for human oversight. While AI agents can handle repetitive and time-consuming tasks, human judgment is still required to ensure that outputs align with organizational goals and ethical standards. As Chris Betz notes, "AI is here to make people go better and faster, not to replace them. It's about augmentation, not replacement."

For businesses to fully realize the potential of agentic AI, they must maintain a balance between automation and human oversight.
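One common way to strike that balance is an approval gate: routine, low-risk actions run automatically, while anything above a risk threshold is deferred to a human. The risk scores, action names and approver callback below are assumptions made for the sketch, not a prescribed policy:

```python
# Illustrative human-in-the-loop approval gate. Low-risk actions execute
# automatically; high-risk or unknown actions require a human decision.
# Risk scores and the threshold are arbitrary values for demonstration.

RISK = {"summarize_logs": 1, "rotate_credentials": 7}

def execute(action: str, run, ask_human, threshold: int = 5):
    """Run low-risk actions directly; defer high-risk ones to a human approver."""
    if RISK.get(action, 10) >= threshold:   # unknown actions default to high risk
        if not ask_human(action):
            return "rejected"
    return run(action)
```

Defaulting unknown actions to high risk is the important design choice here: the agent earns autonomy only for actions the organization has explicitly classified as safe.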
By leveraging AI to handle routine tasks, organizations can free up human employees to focus on more strategic, creative and high-value work. Brian Murphy, CEO of ReliaQuest, also stressed that agentic AI can automate many tasks, but human judgment will remain crucial: "I personally do not believe we are ever going to separate a trained and skilled human in the last mile decision making."

Balancing Innovation with Security

The future of agentic AI holds tremendous promise. As these intelligent systems continue to evolve, they will drive innovation, improve efficiency and create new opportunities for organizations across industries. However, with this power comes significant responsibility. To fully harness the potential of agentic AI, businesses must implement robust security practices, maintain human oversight and ensure that ethical concerns are addressed. By doing so, they can unlock the transformative power of AI while safeguarding their systems against emerging threats.

"Agentic AI represents the next step in automating generative AI. As soon as humans become less prevalent, the risk of failure increases, but with proper safeguards in place, the benefits far outweigh the risks," says David Benas, principal security consultant at Black Duck.

With thoughtful security frameworks and responsible oversight, agentic AI has the potential to transform industries and redefine the way businesses operate, empowering a future where automation and human creativity work hand in hand.