Google's latest genAI shift is a reminder to IT leaders: never trust vendor policy
www.computerworld.com
Every enterprise CIO knows they cannot, and should not, ever trust a vendor's policy position. Whether that's because a vendor might not strictly adhere to its policies or because it can change those policies anytime without notice, it doesn't matter. Google's move last week to back away from assurances it would not help make weapons or engage in surveillance was utterly unsurprising.

Companies are motivated by revenue, profits and market share, and if corporate leaders can improve any of those financial metrics by helping to make weapons of mass destruction or helping a government poison its people, that's what can happen.

But enterprise CIOs are the customers, customers with big budgets that give them major clout. If companies want your dollars, they must agree to whatever you have in your RFP and your contract. Why would these massive vendors agree? Because they fear that one of their competitors will do so if they don't, and that could cost them market share and revenue. Suddenly, you have their C-suite's rapt attention.

As for Google in this case, what was the original language the company felt it needed to avoid? Last year's statement gave a list of "AI applications we will not pursue." This is part of that list:

- "Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints."
- "Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people."
- "Technologies that gather or use information for surveillance violating internationally accepted norms."
- "Technologies whose purpose contravenes widely accepted principles of international law and human rights."

Then, in an eerily predictive point, it added: "As our experience in this space deepens, this list may evolve."

It did evolve. It got a lot shorter.

If a lot of money can be made doing those things, Google now says, in effect: "Human suffering and death and maiming can be trumped by higher profits and market share. Ethics, morality and humanity don't keep the lights on, buddy!"

You'll also notice that the company has bagged its "Don't be evil" tagline; Google apparently ditched it 10 years ago. Maybe it could be updated now to something like this: "Google. Where we never let avoiding evil stand in the way of making a profit."

I was recently discussing this issue with two executives at Phoenix Technologies, a Swiss cloud provider. They made the argument that enterprise CIOs shouldn't rely on vendor promises, especially those of large language model (LLM) makers, including promises about how their models are trained and used.

"If you are reliant on the model makers and their terms and conditions state that they can service anybody, you have to be willing to deal with the fallout," said Peter DeMeo, the Phoenix group chief product officer. "You really can't trust the model makers, especially when they need revenue from government contracts."

His colleague, Phoenix group CTO Nunez Mencias, applauded Google for removing the restriction, given that it was unlikely it could ever be relied on: "The model makers can always change their policies, their rules."

But there's a big difference between being unable to rely on a vendor's self-stated rules and being powerless to discourage AI use in areas your company might not be comfortable with. Just remember: entities out there doing things you don't like are always going to be able to get generative AI (genAI) services and tools from somebody.
You think large terrorist cells can't use their money to pay somebody to craft LLMs for them? Even the most powerful enterprises can't stop it from happening. But that may not be the point. Walmart, ExxonMobil, Amazon, Chase, Hilton, Pfizer, Toyota and the rest of those heavy-hitters merely want to pick and choose where their money is spent.

Big enterprises can't stop AI from being used to do things they don't like, but they can make sure none of it is being funded with their money.

If they add a clause to every RFP saying they will only work with model makers that agree not to do X, Y or Z, that will get a lot of attention. The contract would have to be realistic, though. It might say, for instance: "If the model maker later chooses to accept payments for the above-described prohibited acts, it must reimburse all of the dollars we have already paid and must also give us 18 months' notice so that we can replace the vendor with a company that will respect the terms of our contracts."

From the perspective of Google, along with Microsoft, OpenAI, IBM, AWS and others, the idea is to take enterprise dollars on top of government contracts. If they came to believe that's suddenly an either/or scenario, they might reconsider.

Given that Google has decided revenue is more important than morality, the answer is not to appeal to its morality. If money is all these companies care about, speak that language.

Fortunately for enterprises, there are plenty of large companies willing to handle your genAI needs. Perhaps now is the time to use your buying power to influence who else they work with and limit what they do.