Is Open Source a Threat to National Security?
Open-source software is a lifesaver for startups and enterprises alike as they attempt to deliver value to customers faster. While open source is no longer viewed with the suspicion it once was in business settings, its very openness leaves it exposed to poisoning by bad actors.

Open-source AI and software can present serious national security risks -- particularly as critical infrastructure increasingly relies on them. "While open-source technology fosters rapid innovation, it doesn't inherently have more vulnerabilities than closed-source software," says Christopher Robinson, chief security architect at the Open Source Security Foundation (OpenSSF). The difference is that open-source vulnerabilities are publicly disclosed, while closed-source vendors may not always reveal their security defects.

Incidents such as the XZ-Utils backdoor earlier this year demonstrate how sophisticated actors, including nation-states, can target overextended maintainers to introduce malicious code. However, the XZ-Utils backdoor was stopped because the open-source community's transparency allowed a member to identify the malicious behavior.

At the root of these risks are poor software development practices, a lack of secure development training, limited resources, and insufficient access to security tools, such as scanners or secure build infrastructure. "Also, the lack of rigorous vetting and due diligence by software consumers exacerbates the risk," says Robinson. The threats are not limited to open source but extend to closed-source software and hardware, pointing to a broader, systemic issue across the tech ecosystem. To prevent exploitation on a national level, trust in open-source tools must be reinforced by strong security measures.

A primary threat is the lack of support and funding for open-source maintainers, many of whom are unpaid volunteers. Organizations often adopt open-source software without vetting its security, assuming volunteers will manage it.

Another often overlooked issue is conflating trust with security. Simply being a trusted maintainer doesn't ensure a project's security. Lawmakers and executives need to recognize that securing open source demands structured, ongoing support.

AI systems, whether open or closed source, are susceptible to prompt injection and tampering with model training. The recent list of the top 10 AI threats from OWASP, the Open Worldwide Application Security Project, highlights these risks, underscoring the need for robust security practices in AI development. "Since AI development is software development, it can benefit from appropriate security engineering," says Robinson. Without these practices, AI systems become highly susceptible to serious threats. Recognizing and addressing these vulnerabilities is essential to a secure open-source ecosystem.

At the company level, boards and executives need to understand that using open-source software involves effective due diligence, monitoring, and contributing back to its maintenance. This includes adopting practices like creating and sharing software bills of materials (SBOMs) and providing resources to support maintainers. Fellowship programs can also provide sustainable support by involving students or early-career professionals in maintaining essential projects. These steps will create a more resilient open-source ecosystem, benefiting national security.
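To make the SBOM idea concrete, the following minimal Python sketch inventories the packages installed in the current environment and emits a CycloneDX-style JSON document. It is an illustration of the concept rather than a substitute for dedicated SBOM tooling: the field names follow CycloneDX conventions, the spec version shown is an assumption, and production SBOMs would also capture hashes, licenses, and dependency relationships.

    import json
    import uuid
    from importlib.metadata import distributions

    def build_minimal_sbom():
        # Collect installed Python packages into a minimal CycloneDX-style SBOM.
        # Illustrative only: real SBOM generators also record hashes, licenses,
        # and the relationships between components.
        components = [
            {"type": "library", "name": dist.metadata["Name"], "version": dist.version}
            for dist in distributions()
        ]
        return {
            "bomFormat": "CycloneDX",
            "specVersion": "1.5",  # assumed version, purely for illustration
            "serialNumber": f"urn:uuid:{uuid.uuid4()}",
            "components": components,
        }

    if __name__ == "__main__":
        print(json.dumps(build_minimal_sbom(), indent=2))

Sharing a document like this alongside each release gives downstream consumers a machine-readable inventory they can check against vulnerability databases.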
Mitigating threats to open source requires a multifaceted approach that includes proactive security practices, automated tools, and industry collaboration and support. "Tools like OpenSSF's Scorecard, GUAC, OSV, OpenVEX, Protobom, and gittuf can help identify vulnerabilities early by assessing dependencies and project security," says Robinson. Integrating these tools into development pipelines ensures that high-risk issues are identified, prioritized, and addressed promptly.

Additionally, addressing sophisticated threats from nation-states and other malicious actors requires collaboration and information-sharing across industries and government. Sharing threat intelligence and establishing national-level protocols will keep maintainers informed about emerging risks and better prepared for attacks. By supporting maintainers with the right resources and fostering a collaborative intelligence network, the open-source ecosystem can become more resilient.

Infrastructure Is at Risk

While the widespread use of open-source components accelerates development and reduces costs, it can expose critical infrastructure to vulnerabilities.

"Open-source software is often more susceptible to exploitation than proprietary code, with research showing it accounts for 95% of all security risks in applications. Malicious actors can inject flaws or backdoors into open-source packages, and poorly maintained components may remain unpatched for extended periods, heightening the potential for cyberattacks," says Nick Mistry, CISO at software supply chain security management company Lineaje. As open-source software becomes deeply embedded in both government and private-sector systems, the attack surface grows, posing a real threat to national security.

To mitigate these risks, lawmakers and C-suite executives must prioritize the security of open-source components through stricter governance, transparent supply chains, and continuous monitoring.

Dependencies Are a Problem

Open-source AI and software carry unique security considerations, particularly given the scale and interconnected nature of AI models and open-source contributions.

"The open-source supply chain presents a unique security challenge. On one hand, the fact that more people are looking at the code can make it more secure, but on the other hand, anyone can contribute, creating new risks," says Matt Barker, VP and global head of workload identity architecture at machine identity security company Venafi, a CyberArk Company. "This requires a different way of thinking about security, where the very openness that drives innovation also increases potential vulnerabilities if we're not vigilant about assessing and securing each component." However, it's also essential to recognize that open source has consistently driven innovation and resilience across industries.

Organizational leaders must prioritize rigorous evaluation of open-source components and ensure safeguards are in place to track, verify, and secure these contributions.

Many may be underestimating the implications of mingling data, models, and code within open-source AI definitions. "Traditionally, open source is applied to software code alone, but AI relies on various complex elements like training data, weights, and biases, which don't fit cleanly into the traditional open-source model," says Barker. "By not distinguishing between these layers, organizations may unknowingly expose sensitive data or models to risk."

Additionally, reliance on open source for core infrastructure without robust verification procedures or contingencies can leave organizations vulnerable to cascading issues if an open-source component is compromised.
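As one example of the automated dependency checks described above, the sketch below asks the public OSV.dev vulnerability database whether a specific package version has known advisories. The endpoint and request shape follow OSV's published REST API; the package name and version are placeholders, and a real pipeline would loop over every entry in a lockfile or SBOM rather than a single hard-coded component.

    import json
    import urllib.request

    OSV_QUERY_URL = "https://api.osv.dev/v1/query"

    def known_vulnerabilities(name, version, ecosystem="PyPI"):
        # Ask OSV.dev for known vulnerabilities affecting one package version.
        payload = json.dumps({
            "version": version,
            "package": {"name": name, "ecosystem": ecosystem},
        }).encode("utf-8")
        request = urllib.request.Request(
            OSV_QUERY_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request, timeout=30) as response:
            result = json.load(response)
        # OSV returns {"vulns": [...]} when there are matches, or {} when not.
        return result.get("vulns", [])

    if __name__ == "__main__":
        # Placeholder package and version, purely for illustration.
        for vuln in known_vulnerabilities("requests", "2.25.0"):
            print(vuln.get("id"), "-", vuln.get("summary", "no summary"))

Wiring a check like this into a build pipeline, and failing the build on high-severity matches, is one straightforward way to catch compromised or outdated components before they reach production.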
Thus far, the US federal government has not imposed limits on open-source AI.

"If we've learned anything from AI these past few years, it's that there are certainly great benefits and also great dangers," says Edward Tian, CEO of GenAI detection software provider GPTZero. "On one hand, not imposing limits on open-source AI is beneficial when it comes to accessibility and equity. It better prevents monopolies and AI technology only being shaped by a few people. On the other hand, that also means AI can more easily be put in the hands of bad actors. This means there is a greater risk of AI being used for harm, like more advanced cyberattacks or scams, so it absolutely has the possibility of being a threat to national security."

Governance Matters

In an AI context, open-source poisoning involves the manipulation of natural language models, potentially leading to security breaches and online manipulation. This can manifest in discriminatory outcomes, influence on public opinion, and disruptions to critical infrastructure like power grids and transportation systems.

"To address open-source software risks, organizations should implement a robust governance strategy encompassing dependency management, diversified reliance, proactive vulnerability scanning, and regular patching," says Ignacio Llorente, CEO at cloud and edge solution provider and consultancy OpenNebula. Security audits, code reviews, monitoring project health, and active community engagement are crucial for staying informed on emerging vulnerabilities and best practices, thereby enhancing the security and reliability of open-source integrations.

Meanwhile, the White House is in transition while the accelerated pace of AI adoption and innovation continues.

"I would expect nothing less from [adversaries] than to leverage open-source AI as a way to jeopardize national security, whether it be data and information or whether it be [a] nation-state-backed motive with deepfakes," says Chris Hills, chief security strategist at cybersecurity company BeyondTrust. "Boards and C-suites need to understand the risk, how it relates to their business, and what they can do to weigh the risk versus the reward of usage. They also need to understand that no matter how much they want to try to block the usage, the end user has far too many resources that will allow them or enable them to overcome any boundary put in place. Therefore, understanding the usage risk and educating end users will help minimize the risk related to open-source AI usage."

A Front-Row Seat

Aaron Shaha, chief of threat research and intelligence at SaaS-based MDR solution provider Blackpoint Cyber, says he finds watching the poisoning of open-source libraries and code distressing.

"Care and diligence should be used to ensure vetted libraries and distributions are used to limit risk. Consider having an AI policy that all workers read and sign to prevent intellectual property issues, as well as hallucination problems," says Shaha. "Adversarial governments and malicious hackers poisoning open-source code is a large problem. Care must be taken in implementation, as well as a renewed review process of code and binaries."
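One simple building block for that kind of review is confirming that a downloaded artifact matches the checksum the project published before it is installed or executed. The sketch below does this with Python's standard library; the file path and expected digest are placeholders, and many projects now publish cryptographic signatures in addition to plain hashes.

    import hashlib
    from pathlib import Path

    def sha256_of(path, chunk_size=1 << 20):
        # Compute the SHA-256 digest of a file, reading it in chunks so large
        # artifacts do not have to fit in memory.
        digest = hashlib.sha256()
        with Path(path).open("rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_artifact(path, expected_digest):
        # Return True only if the artifact matches the published checksum.
        return sha256_of(path) == expected_digest.lower()

    if __name__ == "__main__":
        # Both values are placeholders; substitute the real download and the
        # checksum published by the upstream project.
        artifact = "downloads/some-component-1.2.3.tar.gz"
        published = "0123456789abcdef" * 4  # 64 hex characters
        print("verified" if verify_artifact(artifact, published) else "MISMATCH - do not install")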
Phil Morris, advisory CISO and managing director at security solution provider NetSPI, says the number of open-source models available on Hugging Face has increased more than 10,000% in the past five years. With that level of growth, the potential for introducing vulnerabilities into a corporate ecosystem is a significant threat that must be addressed proactively.

To mitigate the risks of open-source AI, companies should implement governance teams, technical feasibility groups, and security awareness training to set guardrails for the appropriate use of AI.

"There are realistic attack vectors for open-source software, so this is a fresh opportunity to educate your leadership on how to manage these unique risks," says Morris. "Just as with other instances of shadow IT, your risk profile has increased. Are you breaking down silos between the data science teams and the operational teams that have to support and monitor this technology? Are you running red-team exercises against these deployments? These are two best practices that can be overlooked in the rush to build and deploy these platforms."

It's also important to understand the difference between vulnerabilities and threats.

"Over 62% of the open-source code in a typical app or API is never used and creates no danger, even if it has known vulnerabilities (CVEs)," says Jeff Williams, co-founder and CTO at runtime application security company Contrast Security. "Consequently, only 5% to 10% of CVEs in real-world applications are actually exploitable. I recommend getting runtime context to confirm exploitability before investing in fixing issues that aren't dangerous."

Most organizations analyze open-source code and custom code separately, which obscures many risks and gives organizations a false sense of security.

"Custom code risks are more prevalent and more critical than open-source issues," says Williams. "Organizations should leverage runtime security to analyze fully assembled applications and APIs, including custom code, libraries, frameworks, and servers together."
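To illustrate what runtime context can mean in practice, here is a deliberately simplified Python sketch that checks which scanner-flagged packages a running application has actually imported. The flagged-package list is a placeholder, and commercial runtime security tools instrument applications far more deeply, but the principle is the same: confirm that vulnerable code is actually in use before spending remediation effort on it.

    import sys

    def actually_loaded(flagged_packages):
        # Return the flagged packages that the running process has imported.
        # A crude stand-in for runtime context: a vulnerability in a library
        # the process never loads is far less likely to be exploitable.
        # (Note that import names and distribution names can differ.)
        loaded_top_level = {name.split(".")[0] for name in sys.modules}
        return set(flagged_packages) & loaded_top_level

    if __name__ == "__main__":
        # Placeholder list of packages a scanner reported CVEs against.
        flagged = {"lxml", "requests", "urllib3"}
        in_use = actually_loaded(flagged)
        print("Prioritize review of:", sorted(in_use) if in_use else "none currently loaded")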