It's 2025. Let's stop grading generative AI on a curve
Compared to other tech giants, Apple's approach to generative AI is strikingly measured. Its new Apple Intelligence errs on the side of sidestepping functionality that could go awry or be misused: Image Playground, for example, produces only cheerfully synthetic, Pixar-esque graphics, not anything that might be mistaken for a photograph.

But even Apple can do only so much to tamp down AI's tendency to run wild. Another Apple Intelligence feature uses the technology to summarize notifications. Last month, its précis of a BBC article about Luigi Mangione, charged with having murdered UnitedHealthcare CEO Brian Thompson, mistakenly stated that Mangione had shot himself. Understandably, the BBC was not pleased. Reacting to both that botched summary and an earlier one inaccurately claiming that Israeli Prime Minister Benjamin Netanyahu had been arrested, a journalists' group said that Apple should just ditch the notification summaries, period.

It's dystopian that a new AI feature, from Apple of all companies, has spewed misinformation about some of the world's most-watched news stories. And that's before you get to the fact that the Apple Intelligence summaries, even when accurate, often read like they were written by an alien only vaguely familiar with the ways of human beings. Given that they're compressing items that were usually pretty concise in the first place, it's just not clear that the summaries are a net positive for users of Apple's platforms. I can't imagine the company will kill the feature, but maybe it should have held off releasing it until it worked better.

Across the tech business, the race to implement AI has led companies to ship functionality they know to be raw and erratic, not because it's buggy in any traditional sense, but because unpredictability is baked into the system. For an industry that's accustomed to working in ones and zeroes, making quality control literally binary, that's a sea change. Thirty years ago, Intel's Pentium processor turned out to have a glitch that could cause it to divide numbers incorrectly. The chances of the defect affecting any given calculation were apparently one in nine billion, which initially led Intel to downplay the flaw. After the chipmaker realized its blitheness was a bad look, it apologized and spent millions to provide debugged Pentiums to PC owners who requested them.

With today's AI, flakiness is so core to the experience that some of my conversations with Anthropic's Claude are dominated by its self-aware acknowledgments of its limitations. In response to one recent query, it helpfully explained, "I should note that while I try to be accurate, I may hallucinate details when answering questions about very obscure topics like this." That's better than the maniacally overconfident personas of other chatbots, but it's still a stopgap, not a solution.

It's true that the current versions of Claude, ChatGPT, and their rivals are less prone to getting stuff laughably wrong than their predecessors were. But they remain very good at weaving plausible-sounding inaccuracies into mostly true material. Computer graphics long faced an "uncanny valley" problem, in which realistic animated people that were 90% convincing tended to leave viewers fixated on the 10% that was imperfect, such as eyes that lacked a convincingly human glint. With AI, by contrast, there's a "canny valley": the percentage of material it generates that's faulty, but not obviously so.
It's a more pernicious issue than hallucinations that are easy to spot, and it isn't going away anytime soon.

Now, I'm aware that I often use this newsletter to carp about AI's imperfections. I swear I'm not a Luddite. I'm fully capable of being dazzled by AI-infused features, and I don't think they need to attain perfection to be extraordinarily useful. For instance, this week's newsletter implements a formatting tweak that I couldn't quite figure out on my own. Claude handled most of the coding work with only general instructions from me. Even factoring in the time I spent wrapping up the job, it felt like a miracle. (I'd previously tried ChatGPT and found its advice unworkable.)

As tempting as it is to cut an infinite amount of slack for something as astonishing as generative AI, the best computing breakthroughs don't require any special dispensation. Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic. With all due respect, the opposite may be true: We know that a new technology is sufficiently developed when it's so dependable that we regard it as mundane, not magical. When was the last time you found yourself awed by the internet? Or electricity, or air travel?

If 2025 is the year that generative AI's novelty fades away, it might force tech companies to hold it to the same standards as all the older, more familiar technologies at their disposal. That coming-of-age can't arrive soon enough, and in its own unglamorous way, it would be a giant leap forward.

Read/Watch/Listen/Try

At Fast Company, Jared Newman's roundups of the year's best new apps, both brand-new ones and meaty upgrades, are a holiday tradition. Here's his list for 2024. It inspired me to recommend three additional productivity apps. None of them are all that well-known, but they're as essential to my work as any of the big-name ones.

Bear. After ditching Evernote in 2023, I've tried an infinite number of note-taking apps. Bear, available for Macs, iPhones, and iPads, is the one that comes closest to thinking the way I do. Its interface is beautifully minimalist, and I've bonded with its use of hashtags as an organizational tool. Even the Pro version is an absurdly reasonable $30 a year.

Reclaim.ai. This web-based app helps ensure that I actually complete the tasks on my to-do list by intelligently slotting them into available space on my Google Calendar, where they're impossible to ignore. There are paid plans, but the free version offers everything I need and more. Until just now, I somehow missed that Reclaim was acquired by Dropbox in August; here's hoping that company doesn't mess too much with a good thing.

Focus. Back in 2015, I learned (from a Fast Company article, naturally!) about a productivity technique called Pomodoro. It involves dividing your workday into 25-minute chunks, with brief breaks in between if you need them. I continue to find it a boon to my efficiency, and my favorite way to manage it is this elegant timer app for Macs, iPhones, iPads, and Apple Watches (not to be confused with a bunch of other timers also called Focus).

You've been reading Plugged In, Fast Company's weekly tech newsletter from me, global technology editor Harry McCracken. If a friend or colleague forwarded this edition to you, or if you're reading it on FastCompany.com, you can check out previous issues and sign up to get it yourself every Wednesday morning. I love hearing from you: Ping me at hmccracken@fastcompany.com with your feedback and ideas for future newsletters.
I'm also happy to hear from you on Bluesky, Mastodon, or Threads.

More top tech stories from Fast Company

AI vaporware: 7 products that didn't materialize in 2024
GPT-5, an Alexa overhaul, Google's Project Astra, and more.

Got an Apple computer for Christmas? Here are 6 apps and games to try with that new Mac
Pixelmator Pro, Prince of Persia, The WereCleaner, and other fun ways to test out what that new machine can do.

An ex-OpenAI exec and futurist talks about AI in 2025 and beyond
AI could bring some real benefits next year, but also some unforeseen side effects.

YouTube has a new plan to combat clickbait
The video platform will take action against content that features clickbait titles and thumbnails, particularly those tied to breaking news or current events.

45 years ago, the Walkman changed how we listen to music
In 1979, Sony premiered the Walkman to limited fanfare. Now, the entire music tech market has formed around it.

How to put down your phone in 2025
A quick guide to achieving your New Year's resolution of getting offline.