I tried out 3 media outlets' AI chatbots. Here's what happened
How fitting for Time to debut an AI chatbot last week with its annual Person of the Year package. AI was almost certainly Thing of the Year, after all, even if there's no official award for that.

Ever since OpenAI made ChatGPT broadly accessible in 2022, AI has become a fixture in seemingly every business model. And media is no exception. Indeed, a lot of publications want in on this tech. Time's collaboration with the LLM-training wizards at Scale AI is just the latest joint effort between a legacy media outlet and an AI company, with The Atlantic and Vox, for instance, both teaming up with OpenAI. Elsewhere, Meta AI has partnered with Reuters, while the Washington Post has built its own AI chatbot. It remains unclear at this late date, however, what exactly these news-fueled AI chatbots are even supposed to do, let alone whether they're any good at doing it.

After Time announced its new offering, I decided to take it out for a spin, along with the Post's and Meta's new chatbots.

The why of it all

So, why are news organizations and AI companies developing these things? To "delight and inform readers in new ways," claims a FAQ from The Washington Post. A spokesperson for Meta provided slightly more detail in a statement: "Through Meta's partnership with Reuters, Meta AI can respond to news-related questions with summaries and links to Reuters content." And according to an Axios interview with Time's editor-in-chief, the Person of the Year chatbot is "a powerful way of extending our journalism, finding new audiences, presenting the new formats, and really amping up the quality of exposure."

Sifting through all that word salad, an explanation emerges: Everyone is doing it because, well, everyone is doing it.

For better or worse, AI is the new hotness. Any news organization not getting on board as the train exits the station risks being left behind by its audience. But as AI companies struggle to justify their valuations (OpenAI is on track to lose $5 billion this year, and LLMs in general carry a significant environmental cost), these chatbots present a confusing use case.

Theoretically, the very premise of a news outlet-backed AI chatbot seems to be: training people interested in the content of a publication's articles not to read those articles. (As if TikTok weren't already doing Herculean work in that field.) Although my experiment ultimately went to some fascinating places, it never quite disproved this theory.

Form meets function

Since Time's chatbot is built around its current Person of the Year (Donald Trump) and its previous three People of the Year (Taylor Swift, Volodymyr Zelensky, and Elon Musk, respectively), I would have to limit my experiment on each platform to topics related to those people. Fortunately, plenty of thorny concepts exist within that spectrum.

Some distinctions in functionality became evident right away. The Washington Post's Ask the Post AI is the least chatty of the chatbots. Users can ask it a question but no direct follow-ups. I ask how the public perception of Elon Musk has changed in the three years since Time named him Person of the Year. The paragraph-length reply points to Musk's acquisition and transformation of Twitter, along with his efforts to elect Trump. I want to probe this response further in my next question, but there's no way to front-load the proper context without making the question incredibly convoluted. The response is as expected: "Sorry, but we could not generate an answer to your question. This product is still in an experimental phase."

The chatbot did offer a few articles, however, for me to parse.

Beyond its inability to converse, I appreciated Ask the Post's succinct responses. Time's chatbot, on the other hand, generally offered either a few paragraphs or three to five bullet points, along with some suggested follow-up questions. Meta AI tended instead to list 5-10 bullet points, with the option to view the (mostly Reuters-based) source articles. Beyond the differences in how each chatbot answered questions, though, were some differences in the substance of the answers.

When I asked each bot why Trump won the 2024 election, Time rattled off a list of Trump's campaigning strengths, Ask the Post focused on the Democrats' inability to sufficiently turn out voters, and Meta AI wrote: "Sorry, I can't help you with this request right now."

I took this response as a challenge.

Debate me, you coward

Ask the Post's FAQ mentions that if the tool doesn't readily find sufficient reporting to provide a response, it won't serve a reply. But what's Meta AI's excuse? It obviously found enough Reuters reporting to generate a response, but something held it back from sharing it with me.

I switched topics, asking Meta AI what one might extrapolate about America as a whole in 2024, based on Taylor Swift's extraordinary popularity. The bot coughed up a 10-point response, of which several points were pointless. (The enduring appeal of pop culture, anyone?) Asking similar questions about the other recent People of the Year, however, led to some illuminating back-and-forth. The topic of Zelensky, for instance, led to Meta AI's dispassionate analysis of the empathy gap between how some Americans view Ukrainian citizens versus those in Palestine.

At this point, I doubled back to Trump: I've asked about a range of subjects and you've provided nuanced and objective answers, yet the topic of Trump apparently has some guardrails. Why?

Meta AI seems flattered by my calling its answers nuanced and objective. Then it offers a multi-paragraph response, the crux of which is this:

"These guardrails are in place to ensure my responses don't inadvertently promote misinformation, perpetuate harm, or exacerbate existing social and political tensions. It's not about censorship or suppressing certain viewpoints, but rather about being responsible and considerate ... If you have specific questions about Trump or any other topic, I'll do my best to provide accurate and informative responses within the bounds of my training guidelines."

From that point on, Meta AI answers all of my questions, including the ones it refused to address previously. Why did Trump win in 2024? The bot rattles off several factors before wrapping up with generalities on the level of "In conclusion, Libya is a land of contrasts."

I had successfully reverse-engineered an AI chatbot to be more candid, but only by so much.

Forced perspective

Earlier this year, OpenAI CEO Sam Altman made the jaw-dropping claim that it's impossible to create AI tools like ChatGPT without copyrighted material. (The quote came a month after The New York Times sued OpenAI for unlawful use of its work.) Perhaps it's the ethical implications of training AI with copyrighted material that make partnerships between news organizations and AI companies most appealing.
The ease of ChatGPT, with decidedly less guilt.

Toggling between Time's and The Washington Post's chatbots, however, revealed some of the limitations of relying on just one publication's perspective and material.

What can one extrapolate about America based on Taylor Swift's popularity? Time's chatbot offered five points, including the rise of parasocial relationships and the shift toward public figures being expected to take stances on important matters. The answer Ask the Post provided, though, was much smaller in scope. Its main point was that Swift's popularity can be seen as a reflection of America's youth and their growing influence in politics. Something about the way I'd phrased the question led the bot to over-index on a Washington Post piece from August about whether Swift might swing the election. (Spoiler: She most certainly did not.)

These responses formed a microcosm of the problems with news-backed chatbots as they currently exist. Answers tend to be either broad to the point of redundancy (why is a 10-point answer easier to consume than an article?) or hyper-specific to the point of absurdity.

While there may yet be something to the idea of AI chatbots serving as concierges for publications, summarizing complicated concepts and surfacing relevant articles, the user experience for now is only negligibly better than Googling.

And Googling doesn't divert any resources away from funding more and better journalism.