Why Apple's AI-driven reality distortion matters
Apple has been forced to admit what every company involved in artificial intelligence (AI) should also be forced to state: AI makes mistakes, just like people do.

On the surface, it's not a terribly big deal: Apple's AI badly mangled a handful of news headlines. The BBC complained about the mangling. Because it was a story about Apple, everyone discussed it. Apple was eventually forced to answer the criticisms and come up with a plan of action to make things better in the future.

What that plan means is that the company will update Apple Intelligence in the coming weeks so that it in some way clarifies when a notification has been summarized by AI. The idea behind this is that people reading those headlines will know that there could be a machine-generated error (as opposed to an error by humans) in the news they are perusing. The inference is, of course, that you should question everything you read to protect yourself against machine-generated error or human mistakes.

Question everything: human or AI

The humans who generate news are up in arms, of course. They see the complaint as a cause célèbre from which to make a stand against their own eventual replacement by machines. The UK National Union of Journalists, Reporters Without Borders, and the head of Meta's Oversight Board (if that board still exists by the end of the week) have all pointed to these erroneous headlines to suggest Apple's AI isn't yet up to the task. (Though even Apple's critics point out that part of the problem is that, even under human control, public trust in news has already sunk to record lows.)

Those critics also argue that telling users a news headline has been generated by AI doesn't go far enough, because it still means readers must confirm what they read. "It just transfers the responsibility to users, who, in an already confusing information landscape, will be expected to check if information is true or not," Vincent Berthier, head of RSF's technology and journalism desk, told the BBC.

But is that really such a bad thing? Shouldn't readers of human-generated news reports already be checking what they read? French philosopher and media literacy thought leader Michel Foucault would argue that every reader of any news brand should run what they read through an effective framework of critical media analysis. He would urge readers to criticize the workings of institutions that appear to be both neutral and independent. That includes Apple, of course, as well as the BBC, or even me.

Why this and not that?

The idea (and it really isn't a complicated one) is that you should rarely, if ever, unquestioningly believe what you read, no matter who wrote it, human or machine. What is written is one thing; why it is written is another. In this case, why has the BBC focused particularly on Apple's error, rather than exploring the other errors that come with AI?

To some extent, the story misses the biggest point: if AI isn't yet ready to handle a task as relatively trivial as automatic news headline summaries, that bodes badly for all the other things we're being told AI should be used for. By inference, it means every AI system, from autonomous vehicles to public transit management or even machine intelligence-supported health services, can make mistakes. Knowing that machines make errors might help people better prepare to handle those errors as they transpire.
As AI becomes more widely deployed, it becomes very important to plan for what to do when things go wrong. The relatively trivial Apple News headline story's biggest takeaway is that things will go wrong, so what are we going to do when that happens, particularly when the errors made are more serious than a headline?

Why mistakes happen

One more difference between human and machine is that it is not always possible to identify where AI errors originate. After all, in most cases, human error can be discussed and the reasons for it understood. In contrast, machine-driven errors take place in response to whatever algorithms are used to drive the AI, relationships and decision-making processes that may not be at all transparent: the so-called black box problem machine intelligence practitioners have been concerned about for decades. At times, this could mean the logic prompting those errors isn't visible, which means mistakes can easily recur.

It is not just Apple Intelligence that hallucinates, either. All the machines hallucinate, and it's incredibly important to recognize this before too much discretionary power is given to them. It would also be useful to see major news corporations take a deeper look into the extent to which AI reflects the prejudices of those who own it, rather than trivializing this important matter around discussion of a single brand. There is a danger, after all, that AI in news becomes a living example of centralized media ownership on steroids, weaving a mirror of the world that reflects a narrowing outlook.

We need tough scrutiny for AI

Given that AI is expected to have a profound impact on culture and society, it seems important to give its implementation serious scrutiny. At the very least, Apple's proposed solution, ensuring humans can easily identify when AI has been used to decide a news headline, seems a relevant first step towards putting such scrutiny in place. We should demand the same transparency wherever AI is applied, such as health insurance payment denials. That's as true for Apple (itself currently planning to extend Apple News into new markets) as it is for anyone else in the business of using AI to get things done.

At the end of the day, the story is not the headline. The story is why the headline was put there in the first place. At Apple. And at the BBC.

You can follow me on social media! Join me on BlueSky, LinkedIn, Mastodon, and MeWe.