The DeepSeek Effect: Why Your Company Needs an AI Usage Policy and How to Create One
LatestMachine LearningThe DeepSeek Effect Why Your Company Needs an AI Usage Policy and How to Create One 0 like February 7, 2025Share this postAuthor(s): Pawel Rzeszucinski, PhD Originally published on Towards AI. AI Usage Policy acts as a guiding force for safety and compliance in the turbulent digital world [Source: DALL-E]Whats to fear?The rise of accessible, powerful AI tools has been nothing short of revolutionary. From content creation to code generation, these technologies promise to supercharge productivity and unlock new levels of innovation. But with this incredible power comes significant responsibility, and frankly, a healthy dose of risk.Recent events have thrown this reality into sharp relief. The emergence of free and readily available AI models, like DeepSeek, presents a double-edged sword. While offering impressive capabilities, DeepSeeks aggressive user data collection practices [1], coupled with concerning security test failures [2], serve as a stark warning. Its no longer a question of if AI will impact your organization, but how safely it will be adopted (aka. how much damage it can do when handled irresponsibly).At WebPros, we recognized this shift early on. We understood that simply hoping employees would use AI responsibly wasnt a viable strategy. Compliance in the age of AI isnt just about ticking boxes; its about safeguarding your data, protecting your intellectual property, and ensuring responsible innovation. Dont treat this text as just another general data security concern piece; the business model behind many free and lower-tier AI toolsLets delve deeper into why simply hoping for responsible use is not just naive, but potentially disastrous, especially when considering the data-hungry nature of these readily available AI platforms.When I say Compliance in the age of AI isnt just about ticking boxes, its because the risks are far more nuanced and pervasive than traditional IT compliance. With those free AI tools, were not just talking about users accidentally downloading malware or sharing files on unsecured networks. We are facing a scenario where the core business model of many of these AI services directly relies on leveraging user inputs including your companys confidential data from training and improving their AI models, to extracting meaningful strategic insights on a large (national) scale.Think about it: these tools are often offered at little to no upfront cost precisely because the payment is in the data you provide. Your prompts, your inputs, the text you feed into them, the code you ask them to analyze all of this can be, and often is, explicitly used to refine and enhance the AI model itself and will in the future, more or less explicitly, be available to general public with new model releases. This isnt some hidden clause buried in legalese; its frequently stated quite clearly in their terms of service or privacy policies.Now, lets unpack why this is a huge threat to your companys data, intellectual property, and security compliance:Data leakage and intellectual property theft: Imagine an employee uses a free AI tool to summarize a sensitive internal document, refine a product strategy, debug proprietary code, or the most frequent scenario check (sensitive) email for style and grammar. By doing so, they are effectively feeding potentially confidential information directly into the AI providers system. This data becomes part of the providers training dataset. 
Data leakage and intellectual property theft: Imagine an employee uses a free AI tool to summarize a sensitive internal document, refine a product strategy, debug proprietary code, or, in the most frequent scenario, check a (sensitive) email for style and grammar. By doing so, they are effectively feeding potentially confidential information directly into the AI provider's system. This data becomes part of the provider's training dataset. While the provider may anonymize it to some extent, the risk of sensitive concepts, unique approaches, and even identifiable snippets of code or text being incorporated into the model, and potentially surfacing in responses to other users, is very real. This constitutes a significant leak of intellectual property and competitive advantage.

Erosion of confidentiality and trade secrets: Company data, especially trade secrets and confidential business information, is a legally protected asset. By inputting this data into AI tools that use it for training, you are arguably breaking confidentiality agreements (both explicit and implicit) and jeopardizing the legal protection of your trade secrets. If a competitor later uses the same AI tool and happens to receive outputs that reflect your confidential information (even indirectly), it could lead to legal battles and significant financial repercussions.

Compliance violations (GDPR, CCPA, etc.): I should probably have started with this point. Many data privacy regulations, such as GDPR and CCPA, mandate strict controls over how personal data is processed and shared. If your employees are inputting any form of personal data, even indirectly or in pseudonymized form, into AI tools that use it for training, you could be in direct violation of these regulations. The legal and financial penalties for non-compliance can be severe, not to mention the reputational damage. And the AI Act is just around the corner.

Security vulnerabilities: Beyond data leakage, the very act of sending company data to external, often less-scrutinized, free AI platforms can introduce security vulnerabilities. These platforms might have weaker security protocols than your internal systems, making your data more susceptible to breaches or cyberattacks further down the line.

The points above are no joke. Introducing an AI policy is not about stifling innovation; it is about creating a framework for safe and productive AI adoption.

The AI usage policy

I've never seen the interior of a lighthouse. Interesting [Source: DALL-E]

Our policy centers around three core pillars:

Pillar 1: A list of approved AI tools

The first crucial step is acknowledging that not all AI tools are created equal. While the allure of the latest free AI offering is strong, it's essential to approach adoption with careful consideration. Our policy begins by clearly defining a list of AI tools explicitly approved for use within the company.

Why is this so important?

Security and data privacy: As highlighted by the DeepSeek example, not all AI providers prioritize user data security and privacy. By vetting and approving tools, we can select platforms that meet our stringent security standards and comply with relevant data protection regulations (like GDPR and CCPA). The engineering and Data & AI units, together with our legal department, assessed factors like data encryption, data retention policies, and the providers' security certifications.

Functionality and business needs: Not every AI tool aligns with every business need. Our approved list ensures that employees are directed towards tools that are not only secure but also genuinely useful and relevant to their roles (e.g., GitHub Copilot only for software developers). This prevents the proliferation of shadow IT and ensures that AI adoption is driven by business value, not just novelty.

Centralized management and support: By focusing on a defined set of tools, the organization can effectively manage licenses, provide training, and address any technical issues that arise. This streamlined approach is far more efficient and secure than trying to manage a chaotic landscape of unvetted AI applications.

AI tools are no different from other software tools and should be treated in accordance with analogous software policies. You wouldn't allow employees to download and install any software they find on the internet without IT approval. This curated approach provides control and ensures that the tools being used are safe, reliable, and contribute to, rather than detract from, organizational goals.
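To make this pillar concrete, here is a minimal sketch of how an approved-tools list could be expressed as data and checked programmatically, for instance by an internal portal or a proxy that gates traffic to AI services. Everything in it (tool names, role names, data classifications, and the `is_tool_approved` helper) is a hypothetical illustration, not the actual WebPros policy.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedTool:
    """One entry on the company's AI tool allowlist (illustrative)."""
    name: str
    vendor: str
    allowed_roles: frozenset   # job roles cleared to use the tool
    data_classes: frozenset    # data classifications the tool may receive

# Hypothetical entries; real ones would come out of the engineering/legal review.
APPROVED_TOOLS = {
    "github-copilot": ApprovedTool(
        name="github-copilot",
        vendor="GitHub",
        allowed_roles=frozenset({"software-developer"}),
        data_classes=frozenset({"public", "internal-code"}),
    ),
    "internal-chat-llm": ApprovedTool(
        name="internal-chat-llm",
        vendor="self-hosted",
        allowed_roles=frozenset({"software-developer", "marketing", "support"}),
        data_classes=frozenset({"public", "internal"}),
    ),
}

def is_tool_approved(tool: str, role: str, data_class: str) -> bool:
    """Check whether a given role may send a given class of data to a tool."""
    entry = APPROVED_TOOLS.get(tool)
    return (
        entry is not None
        and role in entry.allowed_roles
        and data_class in entry.data_classes
    )

if __name__ == "__main__":
    print(is_tool_approved("github-copilot", "software-developer", "internal-code"))  # True
    print(is_tool_approved("github-copilot", "marketing", "public"))                  # False
```

Keeping the list as structured data rather than prose means the same source of truth can feed an intranet page, a chatbot answering "may I use tool X for this?", and any automated enforcement you later bolt on.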
Pillar 2: Understanding what can and cannot be shared

Perhaps the most critical aspect of an AI usage policy is clarifying what data can and cannot be shared with approved AI tools. AI models learn from data, and the implications of sharing sensitive or confidential information are significant.

Our policy explicitly explains the boundaries of data sharing, providing clear guidelines for employees. Among other things, this includes:

Personally Identifiable Information (PII): Under no circumstances should employees share customer PII, employee PII, or any other data that could be used to identify individuals with AI tools. This is paramount for regulatory compliance and maintaining customer trust.

Confidential business information: Trade secrets, financial data, strategic plans, intellectual property, and any other confidential business information must be strictly protected and never shared with AI tools. This safeguards our competitive advantage and prevents potential data leaks to third-party platforms.

Internal vs. external data: The policy differentiates between data that may be permissible to share (e.g., publicly available information, anonymized datasets) and data that is strictly off-limits. This provides nuanced guidance and avoids overly broad restrictions that could hinder legitimate AI use cases.

Simply stating these rules isn't enough. We announced the details of the AI policy widely, at townhalls and departmental all-hands, and they live in easily accessible Confluence spaces, together with instructions on how to use the approved tools and pointers to additional training material.
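Guidelines like these are easier to follow when lightweight tooling backs them up. Below is a minimal, assumption-heavy sketch of a pre-submission screen that scans a prompt for a few obvious PII patterns before it is forwarded to an approved tool. The function names and patterns are illustrative only; real PII detection is far harder than a handful of regular expressions (names, addresses, and context-dependent identifiers need dedicated DLP tooling), so treat this as a starting point rather than a safeguard.

```python
import re

# Illustrative patterns only; production-grade PII detection needs dedicated
# tooling (e.g., a DLP service). Regexes catch only the most obvious cases.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d -]{7,}\d"),
    "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def screen_prompt(prompt: str) -> list:
    """Return the names of the PII patterns found in the prompt, if any."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

def submit_to_ai_tool(prompt: str) -> None:
    """Refuse to forward a prompt that trips the PII screen (hypothetical gate)."""
    findings = screen_prompt(prompt)
    if findings:
        raise ValueError(f"Prompt blocked, possible PII detected: {', '.join(findings)}")
    # ...here the prompt would be forwarded to the approved tool's API...

if __name__ == "__main__":
    try:
        submit_to_ai_tool("Please proofread: contact jane.doe@example.com about the refund.")
    except ValueError as err:
        print(err)  # Prompt blocked, possible PII detected: email
```

Wiring a check like this into the submission path, whether in a browser extension, a proxy, or a thin wrapper around the approved tool's API, turns the written rule into a default behavior instead of a memory test.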
Pillar 3: A process for requesting new AI tools

Innovation is constant, especially in the rapidly evolving field of AI. A static, inflexible policy is therefore destined to become obsolete quickly. Our policy incorporates a clear and structured process for employees to request the adoption of new AI tools.

This process is designed to be both accessible and rigorous, encompassing:

Clear submission channel: Employees are provided with a straightforward process for submitting requests through a designated online form. This ensures that requests are properly tracked and reviewed.

Technical and legal review: Each request undergoes a thorough review by both technical and legal teams. The technical review assesses security aspects, data handling practices, and integration capabilities. The legal review examines compliance with relevant regulations, terms of service, and potential legal risks associated with the tool.

Policy updates and communication: Newly approved tools are added to the official approved list, and the policy is regularly updated and communicated to all employees. This keeps the policy relevant and ensures everyone is aware of the current guidelines.

This process fosters what we call controlled innovation. It empowers employees to suggest valuable new tools while ensuring that any adoption is carefully vetted and aligned with the organization's security and compliance requirements. It's about embracing progress responsibly, not fearing change.

Conclusions

Can we guarantee 100% compliance? Realistically, probably not. Completely preventing the use of unapproved tools is likely impossible. However, that by no means diminishes the immense value of having a robust AI Usage Policy.

By providing safe and approved tools, clearly outlining data sharing guidelines, and establishing a transparent request process, we significantly reduce the risks associated with uncontrolled AI adoption. We empower our employees to use AI productively and responsibly while simultaneously protecting our organization from potential security breaches, legal liabilities, and reputational damage.

In the age of free and powerful AI, complacency is no longer an option. Developing and implementing a comprehensive AI Usage Policy is not just a best practice; it's becoming a business imperative. It's time for every organization to navigate this new frontier with caution, foresight, and a clear commitment to responsible AI innovation. The future of work is being shaped by AI, and ensuring that this future is secure and ethical starts with a solid policy today.

References

[1] https://www.cnbc.com/2025/02/02/why-deleting-chinas-deepseek-ai-may-be-next-for-millions-of-americans.html, accessed on 05.02.2025

[2] https://www.wired.com/story/deepseeks-ai-jailbreak-prompt-injection-attacks/, accessed on 05.02.2025