
5 Fatal GenAI Mistakes That Could Destroy Your Business In 2025
www.forbes.com
As businesses race to implement generative AI in 2025, they risk making potentially devastating mistakes that could result in severe financial and reputational damage. (Image: Adobe Stock)

According to recent research, 67% of business leaders believe that generative AI will bring significant change to their organizations over the next two years. But in the rush to adopt and deploy this world-changing technology, it's pretty likely that mistakes will be made. The downside of this enormous potential is that when things go wrong, the damage can be quite serious too, from reputational harm to harsh fines and, perhaps worst of all, loss of customer trust. So here's my overview of five of the most common mistakes I believe many businesses and business leaders will make in the coming year, so you can plan to avoid them.

Omitting Human Oversight

Powerful and transformative as it undoubtedly is, we can't ignore the fact that generative AI isn't always entirely accurate. In fact, some sources state that factual errors can be found in as many as 46 percent of AI-generated texts. And in 2023, the tech news website CNET paused the publication of AI-generated news stories after having to issue corrections for 41 out of 77 stories. What this means for businesses is that proofreading, fact-checking and keeping a human in the loop are essential if you don't want to run the risk of making yourself look silly. Of course, humans make mistakes too, and any business involved in information exchange should have robust verification procedures in place, regardless of whether it uses generative AI or not.

Substituting GenAI For Human Creativity And Authenticity

Another mistake I am worried we will see far too frequently is becoming over-reliant on genAI as a substitute for human creativity. This is likely to have negative consequences for the authenticity of a business or brand voice. While it's easy to use ChatGPT or similar tools to churn out huge volumes of emails, blogs, social media posts and the like super-fast, this frequently leads to overly generic, uninspiring content that leaves audiences feeling disconnected or even cheated. Video game publisher Activision Blizzard, for example, was recently criticized by fans for using AI slop in place of human-created artwork. It's important to remember that generative AI should be used as a tool to augment human creativity, not to replace it.

Failing To Protect Personal Data

Unless a generative AI application is run securely on-premises on your own servers, there's often no real way of knowing what will happen to the data entered into it. OpenAI and Google, for example, both state in their EULAs that data uploaded to their generative chatbots can be reviewed by humans or used to further train their algorithms. This has already caused problems for some organizations: Samsung stated that its employees had inadvertently leaked confidential company information by entering it into ChatGPT without being aware of the consequences. Incidents like this create a risk that companies will end up in breach of data protection regulations, which can lead to severe penalties.
This is likely to be an increasingly common occurrence as more and more companies start using generative AI tools, and organizations, particularly those that handle personal customer data at scale, should ensure staff are thoroughly educated about these dangers.

Overlooking Intellectual Property Risks

Many commonly used generative AI tools, including ChatGPT, are trained on vast datasets scraped from the internet, and in many cases this includes copyrighted data. Due to the lack of maturity in AI regulations, the jury is still out on whether this constitutes a breach of IP rights on the part of AI developers, with several cases currently going through the courts. The buck might not stop there, however. It's been suggested that businesses using genAI tools could also find themselves liable at some point in the future if copyright holders manage to convince courts that their rights have been infringed. Failing to assess whether AI-generated output could contain copyright- or trademark-infringing material is likely to land businesses in hot water in 2025 if they aren't taking proactive measures to make sure it doesn't.

Not Having A Generative AI Policy In Place

If you want to minimize the chances that anyone working for your organization makes any of these mistakes, then probably the best thing to do is to tell them not to. The potential use cases for genAI are so varied, and the opportunities it creates so vast, that it's almost certainly going to be misused at some point. Perhaps the most important single step you can take to reduce the chance of that happening is to have a clear, defined framework in place setting out how it can and can't be used. As far as I'm concerned, this is a no-brainer for every organization that stops short of a blanket ban on generative AI, which would itself be a pretty big mistake, given the opportunities the technology creates. Without such a policy in place, you can almost guarantee that it will be used without appropriate oversight, overused to the detriment of human creativity, and allowed to lead to unauthorized disclosure of personal data, IP infringement, and all the other mistakes covered here.

To wrap up: in 2025, we will see organizations take huge steps forward as they become increasingly confident, creative and innovative in the way they use generative AI. We will also see mistakes. Being fearful of the transformative potential of generative AI will most likely hand the lead to the competition, but adopting a careful and cautious approach can save us from costly mistakes.