Unleash Or Suppress AI? The Search For Middle Ground
Is it urgent that we step up regulation of artificial intelligence? That's the view of Geoffrey Hinton, widely considered the "Godfather of AI," who took the opportunity at his recent acceptance of the 2024 Nobel Prize in Physics to sound warnings about the unfettered AI wave sweeping our world. He is calling on governments to develop stronger regulations to guide AI development and deployment, and on companies to fund greater AI safety initiatives, as reported by the University of Toronto.

The question is, when it comes to regulating or moderating AI development, how much is too much? What do we need to understand?

"AI risks are real and need to be addressed, but they are application and use-case specific," Jake Parker, senior director of government relations for the Security Industry Association, told me. Lawmakers "should take a technology-neutral approach to regulation and ensure the most stringent requirements apply to truly high-risk applications based not on how AI technologies work but how they are used," Parker urged.

There is potential for overreach and unintended consequences in trying to broadly constrain a still-nascent technology, Parker cautioned. "Rushing too fast to regulate broadly in such a complex and dynamic field would be a mistake. Getting it right is more important. Overly broad legislation could potentially draw in and limit everyday uses of narrow AI if not carefully crafted."

In many ways, this is not the first time we have encountered the question of regulation versus innovation. "I would argue that these concerns are neither unique nor specific to AI; these are rather challenges addressed widely by other regulations," said J-M Erlendson, transformation engineering lead at Software AG. "It's the specific zeal with which regulators approached the AI problem that is itself an issue."

In addition, regulations meant as AI guardrails would likely be more detrimental to smaller businesses and startups than to larger, established companies. "Most large-scale AI-focused organizations would have shuffled the compliance challenges off to their existing compliance departments," Erlendson pointed out. "This would have left the biggest relative burden on the shoulders of startups and early-stage organizations. Meta puts another cost line on the books, while three engineers in a garage call it quits. I don't think that was anyone's intention."

Still, there are issues with AI, both ethical and practical, that need to be acted on sooner rather than later. AI's impact on data privacy and the use of data "are not matters that can be put off until later," said David De Cremer, dean of the D'Amore-McKim School of Business at Northeastern University and author of The AI-Savvy Leader.

The key is well-targeted efforts and regulations to keep AI safe and fair. "Regulations regarding smaller, specialized AI models would make more sense, because it is those models that bring the most harm and threats, such as models for deepfakes that are creating misinformation," De Cremer said.

Erlendson would like to see a greater emphasis on enforcing existing data and intellectual property regulations. That takes, first and foremost, transparency, both on the training data and on the connection between training input and model output, he said.
Companies need to know the boundaries of development to effectively mitigate the risk of this new technology while leveraging its many specific benefits in targeted use cases.

Liability for the issues or harms that may arise with AI usage is also an open question that needs to be settled. "This is particularly visible in the case of copyright protection. If the model is learning from, and replicating, copyrighted works, transparency can detect and report on violations and provide a framework for legal action," said Erlendson.

"We need to be more precise in defining who is liable when new tech is made available, and what their liabilities are," said De Cremer. "Who is liable if the AI models violate privacy rights, copyrights, and so forth? We need clearer definitions of liability and better guidelines on where the responsibilities lie for developers, businesses transitioning the tech into business and society, and businesses and organizations deploying the tech."

There is a self-regulatory aspect at work as well: businesses want to avoid the risk of extending too far with AI. "They avoid working with AI models that are black boxes, and even for AI experts, today's LLM models are all black boxes," said De Cremer. "This means that most companies mainly use supervised ML models, as they do not want to run any risks toward their customers. The most recent AI models are thus not being used in businesses, and customers are not being exposed to them either."

Ultimately, this self-regulation stems from customers' comfort with AI. Concern about AI safety to date has focused on the development of newer models in the future, which businesses are not using anyway, he added. "It will take some time before that happens. Remember, the most advanced AI models are never simply transferred from the lab to the company; there is always significant time in between."