The Intellectual Property Risks of GenAI
Lisa Morgan, Freelance Writer
November 1, 2024

Generative AI's wildfire adoption is both a blessing and a curse. On one hand, many people are using GenAI to work more efficiently, and businesses are trying to scale it in an enterprise-class way. Meanwhile, the courts and regulators aren't moving at warp speed, so companies need to be very smart about what they're doing or risk intellectual property (IP) infringement, leakage, misuse and abuse.

"The law is certainly behind the business and technology adoptions right now, so a lot of our clients are entering into the space, adopting AI, and creating their own AI tools without a lot of guidance from the courts, in particular around copyright law," says Sarah Bro, a partner at law firm McDermott Will & Emery. "I've been really encouraged to see business and legal directives help mitigate risks or manage relationships around the technology and use, and parties really trying to be proactively thinking about how to address things when we don't have clear-cut legal guidance on every issue at this point."

Why C-Suites and Boards Need to Get Ahead of This Now

GenAI can lead to four types of IP infringement: copyright, trademark, patent, and trade secrets. Thus far, more attention has been paid to the business competitiveness aspect of GenAI than to the potential risks of its usage, which means that companies are not managing risks as adeptly as they should.

The C-suite needs to think about how employees are using confidential and proprietary data. "What gives us a competitive advantage?" says Brad Chin, IP partner at the Bracewell law firm. "Are they using it in marketing for branding a new product or process? Are they using generative AI to create reports? Is the accounting department using generative AI to analyze data they might get from a third party?"

Historically, intellectual property protection has involved non-disclosure agreements (NDAs), and that has not changed. In fact, NDAs should cover GenAI. However, according to Chin, using the company's data, and perhaps others' data, in a GenAI tool raises the question of whether the company's trade secrets are still protected.

"We don't have a lot of court precedent on that yet, but that's one of the considerations courts look at in a company's management of its trade secrets: what procedures, protocols, practices they put in place. So it's important for C-suite executives to understand that the risk is not only the information their employees are putting into AI, but also the AI tools that their employees may be using with respect to someone else's information or data," says Chin. "Most company NDAs and general corporate agreements don't have provisions that account for the use of generative AI or AI tools."

Some features of AI development make GenAI a risk from a copyright and confidentiality standpoint.

"To train machine learning models properly, you need a lot of data. Most savvy AI developers cut their teeth in academic environments, where they weren't trained to consider copyright or privacy. They were simply provided public datasets to play with," says Kirk Sigmon, an intellectual property lawyer and partner at the Banner Witcoff law firm, in an email interview. "As a result, AI developers inside and outside the company aren't being limited in terms of what they can use to train and test models, and they're very tempted to grab whatever they can to improve their models. This can be dangerous: It means that, perhaps more than other developers, they might be tempted to overlook or not even think about copyright or confidentiality issues."
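One practical control that speaks to Sigmon's point is to gate training data on documented provenance and license terms before it reaches a model pipeline. Below is a minimal, hypothetical sketch of such a gate in Python; the metadata fields ("license", "source") and the license allowlist are illustrative assumptions, not a prescribed standard or any firm's recommendation.

```python
# Hypothetical sketch: drop training records that lack documented
# rights, rather than letting developers "grab whatever they can."
# Field names and the allowlist are illustrative assumptions.

ALLOWED_LICENSES = {"cc0", "cc-by-4.0", "internal-approved"}  # example only

def filter_training_records(records):
    """Keep only records with an allowlisted license tag and a
    documented source; collect everything else for review."""
    kept, dropped = [], []
    for rec in records:
        license_tag = (rec.get("license") or "").lower()
        if license_tag in ALLOWED_LICENSES and rec.get("source"):
            kept.append(rec)
        else:
            dropped.append(rec)  # no clear rights: review, don't train on it
    return kept, dropped

corpus = [
    {"text": "public dataset row", "license": "CC0", "source": "dataset-x"},
    {"text": "scraped page", "license": "", "source": ""},  # dropped
]
kept, dropped = filter_training_records(corpus)
print(f"kept {len(kept)} record(s); held {len(dropped)} without clear rights")
```

The design point is simply that records without documented rights are excluded by default and surfaced for review, instead of silently flowing into training.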
Similarly, the art and other visual elements produced by generative AI tools such as Gemini and DALL-E may be copyright protected, and logos may be trademark protected. GenAI could also result in patent-related issues, according to Bracewell's Chin.

"A third party could get access to information inputted into generative AI, which comes up with five different solutions," says Chin. "If the company that has the information then files patents on that technology, it could exclude or preclude that original company from getting that part of the market."

Boards and C-Suites Need to Prioritize GenAI Discussions

Boards and C-suites that have not yet had discussions about the potential risks of GenAI need to start now.

Employees can use and abuse generative AI even when it is not available to them as an official company tool. "It can be really tempting for a junior employee to rely on ChatGPT to help them draft formal-sounding emails, generate creative art for a PowerPoint presentation, and the like. Similarly, some employees might find it too tempting to use their phone to query a chatbot regarding questions that would otherwise require intense research," says Banner Witcoff's Sigmon. "Since such uses don't necessarily make themselves obvious, you can't really figure out if, for example, an employee used generative AI to write an email, much less if they provided confidential information when doing so. This means that companies can be exposed to AI-related risk even when, on an official level, they may not have adopted any AI."

Emily Poler, founding partner at Poler Legal, wonders what would happen if the GenAI platform a company uses becomes unavailable.

"Nobody knows what's going to happen in the various cases that have been brought against companies offering AI platforms, but one possible scenario is that OpenAI and other companies in the space have to destroy the LLMs they've created because the LLMs and/or the output from those LLMs amounts to copyright infringement on a massive scale," says Poler in an email interview. "Relatedly, what happens to your company's data if the generative AI platform you're using goes bankrupt? Another company could buy up this data in a bankruptcy proceeding, and your company might not have a say."

Another point to consider is whether the generative AI platform can use a company's data to refine its LLMs and, if so, whether there are any protections against the company's confidential information being leaked to a third party. There's also the question of how organizations will ensure employees don't rely on AI-generated hallucinations in their work, she says.

Time to Update Policies

Bracewell's Chin recommends doing an audit before creating or updating a policy, so it's clear how and why employees are using GenAI, for what purpose, and what they are trying to achieve.

"The audit should help you understand the who, what, why, when, and where questions, and then putting best practices [in place] -- you can use it, you can't use it, you can use it with these certain restrictions," says Chin.
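To make that concrete, here is a minimal, hypothetical sketch of what an audit record covering Chin's "who, what, why, when, and where" might look like, paired with his three-tier rule (you can use it, you can't, you can with restrictions). The tool names and rules are illustrative assumptions, not legal advice or any firm's recommended framework.

```python
# Hypothetical sketch of a GenAI usage-audit record and a
# three-tier policy decision. Tool names and rules are examples only.
from dataclasses import dataclass
from datetime import datetime, timezone

APPROVED_TOOLS = {"internal-llm"}      # "you can use it"
RESTRICTED_TOOLS = {"public-chatbot"}  # "you can use it with restrictions"

@dataclass
class GenAIUsageRecord:
    who: str    # employee or department
    what: str   # the tool used
    why: str    # business purpose (marketing copy, report drafting, ...)
    when: str   # timestamp of use
    where: str  # internal deployment vs. external service

def policy_decision(record: GenAIUsageRecord, contains_confidential: bool) -> str:
    """Classify a recorded use under the three-tier rule."""
    if record.what in APPROVED_TOOLS:
        return "allowed"
    if record.what in RESTRICTED_TOOLS and not contains_confidential:
        return "allowed-with-restrictions"
    return "blocked"

rec = GenAIUsageRecord(
    who="accounting", what="public-chatbot", why="summarize third-party data",
    when=datetime.now(timezone.utc).isoformat(), where="external",
)
print(policy_decision(rec, contains_confidential=True))  # -> "blocked"
```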
Education is also really important.

Jason Raeburn, a partner in the litigation department of law firm Paul Hastings, says the key point is for CIOs and the C-suite to really engage with and understand the specific use cases for GenAI within their particular industry to assess what risks, if any, arise for their organization.

"As is the case with the use of technology within any large organization, successful implementation involves a careful and specific evaluation of the tech, the context of use, and its wider implications, including intellectual property frameworks, regulatory frameworks, trust, ethics and compliance," says Raeburn in an email interview. "Policies really need to be tailored to the needs of the organization, but at a minimum, they should include a GenAI-in-the-workplace policy so there is clarity as to what the employer considers to be appropriate and inappropriate use for business purposes."

Zara Watson Young, co-founder and CEO at the Watson & Young IP law firm, says the board, CEO and C-suite should regularly discuss how GenAI affects their IP strategies.

"These conversations should identify potential gaps in current policies, keep everyone informed about shifts in the legal landscape, and ensure that the team understands the nuances of AI's impact on copyright and trademark laws," says Watson in an email interview. "Equally important are discussions with counsel, focusing on developing robust IP policies for AI usage, ensuring compliance and implementing enforcement strategies to protect the company's rights."

In the absence of concrete regulations and standards of practice, companies should develop their own policies based on how they use generative AI. According to Poler Legal's Poler, these policies should be split into two types, so they address both sides of the generative AI process: data gathering and training, and output generation.

"Policies for data gathering and training need to be clear on how and what data is used, whether any third-party involvement is part of that process, the vetting and monitoring process for the data, how the data is stored, and how the company is protecting and securing that data," says Poler. "The biggest concerns are privacy, security and infringement. These policies need to be up to date with all regulations, especially for international usage."

Companies using their own datasets and models can better vet, monitor, and control data and models. However, companies using third-party datasets and models need to do their due diligence on them and ensure transparency, security, legal compliance, and ethical usage, such as removing bias.

Policies for output generation should be centered around monitoring. "Companies should develop policies that contain how the monitoring is done for privacy and intellectual property concerns," says Poler. "These policies need to contain instructions and procedures on how outputs are [reviewed] before they are ultimately used, with checklists of important criteria to detect confidential information and protect its intellectual property."
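As one way to picture the kind of checklist Poler describes, here is a minimal, hypothetical sketch of an automated pre-use review of GenAI output. The patterns are illustrative assumptions; a real checklist would reflect the company's own data classification scheme, and an empty result would still not replace human review.

```python
# Hypothetical sketch: run each GenAI output through a checklist of
# confidentiality markers before it is used. Patterns are examples only.
import re

CHECKLIST = {
    "internal marking": re.compile(r"\b(confidential|internal only|trade secret)\b", re.I),
    "possible credential": re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"),
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def review_output(text: str) -> list[str]:
    """Return the checklist items the output trips; an empty list
    means no automated flag, not that the output is safe to ship."""
    return [name for name, pattern in CHECKLIST.items() if pattern.search(text)]

draft = "Per our CONFIDENTIAL pricing model, contact jane@example.com."
flags = review_output(draft)
if flags:
    print("hold for human review:", ", ".join(flags))
```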
Banner Witcoff's Sigmon says companies should establish policies that strike a careful balance between the usefulness of AI-enabled tools and the liability risks they pose. For instance, employees should be strongly discouraged from using any external AI tools that have not been fully tested and approved by their employer.

"Such tools pose both the risk of copyright infringement (if, for example, they generate infringing content) and a risk of confidential information loss, such as if the employee discloses confidential information to the AI and that information is stored, used for future training, or the like," says Sigmon. "In turn, this means that if a company decides to use an AI tool, it should understand that tool deeply: how it operates, what data sets were used to train it, who assumes liability if copyright infringement occurs and/or if sensitive data is exfiltrated, and [more]."

Bottom Line

The wildfire adoption and use of GenAI has outpaced sound risk management. Organizational leaders need to work cohesively to ensure that GenAI usage is in the company's best interests and that the potential risks and liabilities are understood and managed accordingly.

Check to see whether your company's policies are up to date. If not, the time to start talking internally and with counsel is now.

About the Author

Lisa Morgan, Freelance Writer

Lisa Morgan is a freelance writer who covers business and IT strategy and emerging technology for InformationWeek. She has contributed articles, reports, and other types of content to many technology, business, and mainstream publications and sites, including tech pubs, The Washington Post and The Economist Intelligence Unit. Frequent areas of coverage include AI, analytics, cloud, cybersecurity, mobility, software development, and emerging cultural issues affecting the C-suite.