Using ChatGPT to write an email? Sure. But an obituary?
www.vox.com
When his grandmother died about two years ago, Jebar King, the writer of his family, was tasked with drafting her obituary. But King had never written one before and didn't know where to start. The grief wasn't helping either. "I was just like, there's no way I can do this," the 31-year-old from Los Angeles says.

Around the same time, he'd begun using OpenAI's ChatGPT, the artificial intelligence chatbot, tinkering with the technology to create grocery lists and budgeting tools. What if it could help him with the obituary? King fed ChatGPT some details about his grandmother (she was a retired nurse who loved bowling and had a lot of grandkids) and asked it to write an obituary.

The result provided the scaffolding for one of life's most personal pieces of writing. King tweaked the language, added more details, and revised the obituary with the help of his mother. Ultimately, King felt ChatGPT helped him commemorate his grandmother with language that adequately expressed his emotions. "I knew it was a beautiful obituary and it described her life," King, who works in video production for a luxury handbag company, says. "It didn't matter that it was from ChatGPT."

Generative AI has drastically changed the way people communicate and perceive communication. Early on, its uses proved relatively benign: Predictive text in iMessage and Gmail offered suggestions on a word-by-word or phrase-by-phrase basis. But after the technological advances heralded by ChatGPT's public release in late 2022, the applications of the technology exploded. Users found AI helpful for writing emails and recommendation letters, and even for sprucing up responses on dating apps, as the number of chatbots available for experimentation also proliferated. But there was also backlash: If a piece of writing appears insincere or stilted, recipients are quick to claim the author used AI.
Now, the AI chatbot content creep has gotten increasingly personal, with some leveraging it to craft wedding vows, condolences, breakup texts, thank-you notes, and, yes, obituaries. As people apply AI to considerably more heartfelt and genuine forms of communication, they run the risk of offending or appearing grossly insincere if they are found out. Still, users say, AI isn't meant to manufacture sentimentality, but to provide a template onto which they can map their emotions.

A gut check

As anyone who's been asked to give a speech or console a friend can attest, crafting the perfect message is notoriously difficult, especially if you're a first-timer. Because these communications are so personal and meant to evoke a specific response, the pressure's on to nail the tone. There's a thin line between an effective note of support and one that makes the recipient feel worse.

AI tools, then, are particularly attractive in helping nervous scribes avoid a social blunder, offering a gut check to those who know how they feel but can't quite express it. "It's a great way to sanity check yourself about your own intuition," says David Markowitz, an associate professor of communication at Michigan State University. "If you wanted to write an apology letter for some transgression, you can write that apology letter and then give it to ChatGPT or Claude and be like, 'I'm going for a warm and compassionate tone here. Am I right with this, or did I write this well?' And it could actually say, 'It reads a little cold to me. If I were you, I'd probably change a few words here,' and it will just make things better."

Generative AI platforms, of course, have not lived nor experienced emotions, but instead learn about them by scraping massive amounts of literature, psychological research, and other personal writing, Markowitz says. This process is analogous to learning about a culture without experiencing it, he says: through the observation of behavioral patterns rather than direct experience.
So while the tech doesn't understand feelings, per se, it can compare what you've written to what it's learned about how people typically express their sentiments.

Katie Hoffman, a 34-year-old marketer living in Philadelphia, sought ChatGPT's counsel on more than one occasion when broaching particularly sensitive conversations. In one instance, she used it to draft a text to a friend to tell her she wouldn't be attending her wedding. Another time, Hoffman and her sister prompted the chatbot to provide a diplomatic response to a friend who backed out of Hoffman's bachelorette party at the last minute but wanted her money back. "How do we say this without sounding like a jerk, but without making her feel bad?" Hoffman says. "It would be able to give us the message that we crafted from there." Rather than overthink, over-explain, and send a disjointed message with too many details, Hoffman found ChatGPT's scripts more objective and precise than anything she could've written on her own. She always workshopped and personalized the texts before sending them, she says, and her friends were none the wiser.

Ironically, the worse a chatbot performs and the more editing required, the more ownership the author takes over the message, says Mor Naaman, an information science professor at Cornell University. If you're not tweaking its output all that much, the less you feel like you really penned the message. "There might be implications for that as well: You're feeling like a phony, you're feeling like you cheated," Naaman says. But that hasn't stopped many people from trying out chatbots for sentimental communications. Grappling with a bout of writer's block, 26-year-old Gianna Torres used ChatGPT to outsource writing graduation party thank-you notes.
"I know what to say, but I have a hard time actually thinking about it and writing it out," the Philadelphia-based occupational therapist says. "I don't want it to sound silly. I don't want it to sound like I'm not grateful." She prompted it to generate a heartfelt message expressing her thanks for commemorating the milestone. On the first try, ChatGPT spit out a beautiful, albeit long, letter, so she asked for a shorter version, which she wrote verbatim into each card.

"People are like, 'ChatGPT has no emotions,'" Torres says, "which is true, but the way it wrote the message, I feel it."

Torres's friends and family initially had no inkling she had help writing the notes, that is, until her cousin saw a TikTok Torres posted about the workaround. Her cousin was surprised. Torres told her cousin that the fact that she had help didn't negate how she felt; she just needed a little nudge.

An unwelcome reception

While you may believe in your ability to spot AI-crafted language, the average person is pretty bad at parsing whether a message was written by a chatbot. If you feed ChatGPT enough personal information, it can generate a convincing text, even more so if that text includes, or has been edited to include, statements using the words "I," "me," "myself," or "my." These words are among the biggest markers of sincerity in language, according to Markowitz. "They help to indicate some sort of psychological closeness that people feel towards the thing they're talking about," he says.

But if the recipient suspects the author outsourced their sincerity to AI, they don't take it well. "As soon as you suspect that some content is written by AI," Naaman says, "you find [the writer] less trustworthy. You think the communication is less successful." You can see this clearly in the backlash last summer to Google over its Olympics ad for its AI platform, Gemini: Audiences were appalled that a father would turn to AI to help his daughter pen a fan letter to an Olympic athlete.
As the technology continues to proliferate, audiences are increasingly skeptical of content that may seem off or too manufactured. The negative reaction to outsourcing writing that people find inherently emotional may stem from an overall skepticism toward the technology, as well as what its use means for sincerity, says Malte Jung, an information science associate professor at Cornell University who has studied the effects of AI in communication. "People still hold a more negative perception of technology and AI and they might attribute that negative perception to the person using it," he says. (Over half of Americans are more concerned than excited about AI, according to a 2023 Pew Research Center survey.) Jung says that people might think of AI-generated communications as less genuine, authentic, or sincere. If you aren't wrestling with the words to perfectly articulate your emotions, are they even real? Will you even remember how it all felt?

When King, who used ChatGPT to write his grandmother's obituary, relayed how he'd used AI in a reply on X, the response was overwhelmingly negative. "I couldn't believe it," he says. The blowback prompted him to come clean to his mother, who assured him the obituary was beautiful. "It really did make me second-think myself a little bit," King says. "Something that I never even thought was a bad thing, so many people tried to turn into a crazy, evil thing."

When deliberating the ethics of AI communications, intentions do matter to a certain extent. Who hasn't wracked their brain for the perfect mix of language and emotion? The desire to be warm and authentic and genuine could be enough to produce an effective message. "The key question is the effort people put in, the sincerity of what they want to write," Jung says. "That might be independent from how it is perceived.
"You used ChatGPT, then no matter if you're sincere in what you put in, people might still see you negatively."

Generative AI is becoming so ubiquitous, however, that some may not care at all. Chris Harihar, a 39-year-old who works in public relations in New York City, had a specific childhood anecdote he wanted to include in his speech at his sister's wedding but couldn't quite weave it in. So he asked ChatGPT for some help. He uploaded his speech in its current form, told it the story he was aiming to incorporate, and asked it to connect the story to lifelong partnership. "It was able to give me these threads that I hadn't thought of before where it made total sense," Harihar says.

Harihar was an early adopter of AI and uses platforms like Claude and ChatGPT frequently in his personal and professional life, so his family wasn't surprised when he told them he used AI to perfect the speech. Harihar even uses AI tools to answer his 4-year-old daughter's perplexing, ultra-specific questions that are characteristic of kids. Recently, Harihar's daughter wondered why people have different skin tones, and he prompted ChatGPT to offer a kid-friendly explanation. The bot provided a diplomatic and age-appropriate breakdown of melanin. Harihar was impressed; he probably wouldn't have thought to break it down that way, he says. Rather than feel like he lost out on a parenting moment by outsourcing help, Harihar sees the technology as another resource.

"From a parenting perspective, sometimes you're just trying to survive the day," he says.
"Having one of these tools available to you to help make explanations that you otherwise might struggle with for whatever reason are helpful."