The ethical minefield of GenAI
What you need to know and how you can use it responsibly

I'm sure James Madison, the father of copyright law, would have had something to say about Generative AI.

"OpenAI is the antichrist."

That's how a conversation started with a friend of mine in January 2023. He saw the future, and he did not like it. I was living in the future, and saw the benefits.

Like any conversation I try to have with my friends, we met in the middle, however uncomfortable it was, and we continue to do so to this day.

Here were the positions:

As a former journalist, he saw the destruction of content creation, full stop. For some content, he is not wrong. Particular segments of our workforce saw that effect early on, and continue to feel it at an accelerated rate.

As a former journalist myself (to a lesser degree), working at a document technology startup, I also saw the advantages, like transferring some mind-numbing work to a system so people could be more strategic.

I walked him back from his initial concerns (copyright, bias, privacy, and a few other categories listed here), and we agreed that, as a domain, legal technology is one of the few use cases where this technology avoids a lot of the challenges.

Users rely on it for everything from searching across documents to extracting data from their own content.

That content sits in a private, secure bubble, because it's company data; it has to be.

Our customers use it responsibly because of who they are: professionals with an eye on the cutting edge who still respect thousands of years of precedent, and who are establishing policies to use the technology responsibly.

That last point alone gives me confidence that everyone will come to their own conclusions about how to use it, but all three validate why so much money is in the document space versus other domains: it's one of the few places AI makes sense, and it's a space that has been using machine learning models for years, just not at this level of innovation. They've also been doing it responsibly.

However, not everyone is there, so you personally have to act accordingly.

Generative AI is a tool, a transformation that's going to change our lives, some of it for the better. Like any tool, you have to use it responsibly. Anyone can use a hammer in an irresponsible manner, and the same applies here.

I'm going to approach it like I'm reporting the weather: staying as neutral as possible, but highlighting the concerns that even I have for the technology in the public square. The world isn't a fair place, but you can make it more fair through how you personally act, contributing to a better global village.

It's up to you.

Copyright

Let's be clear: AI companies like OpenAI, Anthropic, and Google used a lot of internet content to train their models, so much that they're running out of it.

Much of it is copyrighted, and they didn't exactly ask permission. I didn't get an email about my blog, for example, unless it went to my spam folder. I've even gone so far as to say that Wikipedia is probably the foundation of these models, and Wikipedia itself would not exist without Creative Commons as its copyright model.

This alone is a really sticky legal problem.

Most of the companies are claiming fair use: Section 107 of the Copyright Act defines it as "criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, [and] research."
It's vague enough that it fits the "I'll know it when I see it" model.

This interpretation of copyright law is something I'm fairly sure James Madison didn't anticipate in 1787 while he was trying to figure out where to put his AOL CD into the horse. It's a really generous stance, and it will have to be resolved legally. It will take decades.

We will need new laws to deal with AI and copyright. This discussion will go on forever among technology experts and lawyers as the technology evolves, as it has with previous copyright issues.

And it's not new.

As an example, we haven't really solved copyright issues for social media networks (Instagram claims you grant it a license so it can use your content), and those have been around for decades. Regulations like GDPR (2018) and CCPA (2020) went into effect fairly recently, and social media companies still find ways to live at the edge.

Another example? Font licensing, to the chagrin of Adobe and other font owners. Most designers don't know that typeface designs aren't copyrightable in the United States, and font foundries have been trying to change that for years.

The AI copyright issue will take decades to resolve, and I guarantee almost no one will be happy with the solution, but we'll live with it.

Resolution

There is no easy solution, especially for content already consumed by the LLMs. Technology companies are going to do what they have always done, which is keep a flexible definition of what they can do and adjust accordingly (read: Uber), and that is the case here.

The law will adjust accordingly; it'll just take time.

How you can use it responsibly: always treat the output as a derivative work. I do a pretty heavy edit, and I don't use images that look like anything that's been copyrighted. I'll also use it for an edit pass, but never for the core content.

Data Privacy

As with copyright, AI companies like OpenAI and Anthropic use a lot of information. This sometimes includes details that shouldn't be in the public square, whether it's company data or personal information.

They're trying to keep private information safe, but just as it's hard for any company to secure its systems because all it takes is one hole, the same goes for this technology. There's always some way to unlock technology, and someone always will.

This problem is not unique. Worrying about how OpenAI is handling your data ignores that many companies have much more information about you, secured about as well. Additionally, there's a lot more information out there that is public or hidden in plain sight (there's an amazing amount of information about companies on the SEC EDGAR database, for example), so the notion of privacy doesn't even apply to public companies, by law.

Data privacy will always be an issue in modern society; how we approach it is what matters.

Resolution

If you want to keep something private, don't put it on the internet, ever. It's up to you to decide what should be there or not. Many companies are establishing policies around this (one acquaintance said they have locked down all corporate systems), and that's a good thing.

Protecting sensitive information is a must during this time.
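If you do send text to a hosted model, one practical habit is to scrub the obvious identifiers before the text ever leaves your network. Here is a minimal sketch in Python; the redact_pii helper and the three patterns are my own illustration, not a complete PII solution or any vendor's API.

    import re

    # Illustrative patterns only; real PII detection needs far broader coverage
    # (names, addresses, account numbers) and ideally a dedicated library.
    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    }

    def redact_pii(text: str) -> str:
        """Replace each match with a labeled placeholder before sending it anywhere."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} REDACTED]", text)
        return text

    prompt = "Summarize this note from jane.doe@example.com, phone 555-123-4567."
    print(redact_pii(prompt))
    # Summarize this note from [EMAIL REDACTED], phone [PHONE REDACTED].

It's a blunt instrument, but it follows the policy above: decide what should leave the building, and strip everything else by default.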
Authenticity

When was the last time I saw an LLM hallucinate? Today.

I was walking someone through one of the applications and entered their name. It returned the information of someone completely different at the company, along with other incorrect details.

This doesn't happen a lot, but it happens more with content the LLMs don't have much context for, or where there are so many matches that they can't be lined up. For example, an acquaintance of mine has a rather common name, and it mixes him up with an actor in England. I don't have that problem, except for that pesky Senior Vice President of Finance I'm friends with on Facebook, so the information returned for me is most probably me.

We laughed about it and moved on.

When you add that not all information on the internet is factual (no, there isn't an Easter Bunny or a Santa Claus either, except on some marketing site selling costumes for both), authenticity becomes closely related to bias.

They're trying to fix these problems, but it's hard, because LLMs are designed to predict what words come next, not to know what's true. Neither does Google, and neither does just about every other search engine ever invented, at least not without human intervention.

We've been living with this for the last 30 years, and we will continue to do so. It doesn't seem to affect search engine engagement, either.

It's really up to us to decide what's right and what's wrong. Search engines return wrong results sometimes, and so do the GPTs. Both are learning to get better, and it'll take time.

Resolution

I have told everyone who uses any technology to double-check what they're seeing, like a journalist would: one source is an opinion, but a second reliable source is validation. The confidence with which LLMs return information is akin to Google search results, and no one seems to be affected by that.

The resolution: trust and verify.
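That journalist's rule can even be automated in a crude way: ask the same factual question twice (or ask two different models) and treat any disagreement as a cue to go verify by hand. A minimal sketch of the idea, where trust_and_verify and the stand-in ask callable are my own illustration rather than any vendor's API:

    # Crude "second source" check: agreement doesn't prove an answer right,
    # but disagreement is a strong signal to verify before you rely on it.
    import itertools

    def trust_and_verify(question, ask):
        """`ask` is whatever callable wraps your LLM provider of choice."""
        first = ask(question)
        second = ask(question)
        if first.strip().casefold() == second.strip().casefold():
            return first
        return f"UNVERIFIED (answers disagree): {first!r} vs. {second!r}"

    # Stand-in "model" so the sketch runs; swap in a real API call in practice.
    canned = itertools.cycle(["Paris", "paris "])
    fake_ask = lambda q: next(canned)

    print(trust_and_verify("What is the capital of France?", fake_ask))  # Paris

Exact-match comparison is deliberately naive, since real answers vary in wording; treat this as the shape of the habit, not a product.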
Bias

Repeat after me 100 times: people are biased, so data is biased.

The issue of bias in AI models is part of a larger, ongoing conversation about ethics and fairness in artificial intelligence, but we forget these systems are built by humans.

The reality of the world is that it isn't a fair place, and this is reflected in the data we generate as a society.

For example, the greatest predictor of success in your life is the zip code you grew up in, full stop. For the record, mine was 92804; I went to Walt Disney Elementary School in Anaheim, California. The only benefit was a free trip in the sixth grade.

This is not a new problem (who can forget facial recognition software performing less accurately for certain racial groups, or job application screening algorithms favoring particular demographics?); language models are just the latest technology to face this challenge.

Efforts to address bias in AI are part of a larger push for "responsible AI" or "ethical AI." This movement includes not just addressing bias, but also concerns about AI transparency, accountability, privacy protection, and potential misuse.

It's going to take a long time.

Resolution

Until we have transparency, it's up to you. As with anything you see, there's going to be a lens of perception you'll have to view the information through. You'll determine how much bias there is, and you'll have to convert it to your own mental model.

This conversion is something we all do, every day of the week.

We also have to call for responsible AI. It's important that some type of global movement gets us there. It'll take people and laws. A mix.

Conclusion

Like technology solutions before (Napster comes to mind), all of this eventually works its way through the system to less-than-ideal solutions that we all accept, flaws and all. Musicians don't like Spotify's revenue, but they still use it and other platforms.

Other examples:

Search engines have their flaws and don't know the truth, but they still generate billions in advertising revenue.

Merchants might not necessarily like Amazon, but they accept the platform because of its massive reach.

We will all accept the risks of Generative AI once we see the benefits.

It's all about responsible usage. We learned from the other applications, and we'll learn from this one.

We have to approach it with a realistic view of not only how it is implemented, but how it is analogous to the technology issues of the past, and this will help us solve the future, plain and simple.

To quote the movie Contact:

"You're an interesting species. An interesting mix. You're capable of such beautiful dreams, and such horrible nightmares. You feel so lost, so cut off, so alone, only you're not. See, in all our searching, the only thing we've found that makes the emptiness bearable is each other."

Be responsible. Trust and verify. Campaign for change so there are adequate guardrails in place.

My glass-half-full take is that we'll get there, one person at a time.

This is the end of my TED talk.

Other articles worth reading about this:

Designing in the age of ChatGPT
Striking the Balance: Navigating the Ethics of Generative AI and the Need for Regulation
Ethical Pitfalls of Generative AI

Patrick Neeman is the Vice President of User Experience and Research at Evisort and an advisor for Relevvo. He is also the author of Usability Counts and runs the UX Drinking Game. You can read more about him at Perplexity, and connect with him on LinkedIn, X (formerly Twitter), Threads, and Substack.