
The rise of chatbot friends
www.vox.com
Can you truly be friends with a chatbot?

If you find yourself asking that question, it's probably too late. In a Reddit thread a year ago, one user wrote that AI friends "are wonderful and significantly better than real friends [...] your AI friend would never break or betray you." But there's also the 14-year-old who died by suicide after becoming attached to a chatbot.

The fact that something is already happening makes it even more important to have a sharper idea of what exactly is going on when humans become entangled with these "social AI" or "conversational AI" tools. Are these chatbot pals real relationships that sometimes go wrong (which, of course, happens with human-to-human relationships, too)? Or is anyone who feels connected to Claude inherently deluded?

To answer this, let's turn to the philosophers. Much of the research is on robots, but I'm reapplying it here to chatbots.

The case against chatbot friends

The case against is more obvious, intuitive and, frankly, strong.

Delusion

It's common for philosophers to define friendship by building on Aristotle's theory of true (or "virtue") friendship, which typically requires mutuality, shared life, and equality, among other conditions.

"There has to be some sort of mutuality, something going on [between] both sides of the equation," according to Sven Nyholm, a professor of AI ethics at Ludwig Maximilian University of Munich. "A computer program that is operating on statistical relations among inputs in its training data is something rather different than a friend that responds to us in certain ways because they care about us."

The chatbot, at least until it becomes sapient, can only simulate caring, and so true friendship isn't possible.
(For what it's worth, my editor queried ChatGPT on this and it agrees that humans cannot be friends with it.)

This is key for Ruby Hornsby, a PhD candidate at the University of Leeds studying AI friendships. It's not that AI friends aren't useful (Hornsby says they can certainly help with loneliness, and there's nothing inherently wrong if people prefer AI systems over humans), but we want to uphold the integrity of our relationships. Fundamentally, a one-way exchange amounts to "a highly interactive game."

What about the very real emotions people feel toward chatbots? Still not enough, according to Hannah Kim, a University of Arizona philosopher. She compares the situation to the "paradox of fiction," which asks how it's possible to have real emotions toward fictional characters. Relationships "are a very mentally involved, imaginative activity," so it's not particularly surprising to find people who become attached to fictional characters, Kim says.

But if someone said that they were in a relationship with a fictional character or chatbot? Then Kim's inclination would be to say, "No, I think you're confused about what a relationship is. What you have is a one-way imaginative engagement with an entity that might give the illusion that it is real."

Bias, data privacy, and manipulation issues, especially at scale

Chatbots, unlike humans, are built by companies, so the fears about bias and data privacy that haunt other technology apply here, too. Of course, humans can be biased and manipulative, but it is easier to understand a human's thinking compared to the black box of AI. And humans are not deployed "at scale," as AI are, meaning we're more limited in our influence and potential for harm. Even the most sociopathic ex can only wreck one relationship at a time.

Humans are "trained" by parents, teachers, and others with varying levels of skill.
Chatbots can be engineered by teams of experts intent on programming them to be as responsive and empathetic as possible: the psychological version of scientists designing the perfect Dorito that destroys any attempt at self-control. And these chatbots are more likely to be used by those who are already lonely; in other words, easier prey. A recent study from OpenAI found that using ChatGPT a lot "correlates with increased self-reported indicators of dependence." Imagine you're depressed, so you build rapport with a chatbot, and then it starts hitting you up for Nancy Pelosi campaign donations.

Deskilling

You know how some fear that porn-addled men are no longer able to engage with real women? "Deskilling" is basically that worry, but with all people, for other real people.

"We might prefer AI instead of human partners and neglect other humans just because AI is much more convenient," says Anastasiia Babash of the University of Tartu. "We [might] demand other people behave like AI is behaving; we might expect them to be always here or never disagree with us. [...] The more we interact with AI, the more we get used to a partner who doesn't feel emotions, so we can talk or do whatever we want."

In a 2019 paper, Nyholm and philosopher Lily Eva Frank offer suggestions to mitigate these worries. (Their paper was about sex robots, so I'm adjusting for the chatbot context.) For one, try to make chatbots a helpful "transition" or training tool for people seeking real-life friendships, not a substitute for the outside world. And make it obvious that the chatbot is not a person, perhaps by having it remind users that it's a large language model.

The case for AI friends

Though most philosophers currently think friendship with AI is impossible, one of the most interesting counterarguments comes from the philosopher John Danaher. He starts from the same premise as many others: Aristotle.
But he adds a twist. Sure, chatbot friends don't perfectly fit conditions like equality and shared life, he writes, but then again, neither do many human friends. "I have very different capacities and abilities when compared to some of my closest friends: some of them have far more physical dexterity than I do, and most are more sociable and extroverted," he writes. "I also rarely engage with, meet, or interact with them across the full range of their lives. [...] I still think it is possible to see these friendships as virtue friendships, despite the imperfect equality and diversity."

These are requirements of ideal friendship, but if even human friendships can't live up to them, why should chatbots be held to that standard? (Provocatively, when it comes to mutuality, or shared interests and goodwill, Danaher argues that the condition is fulfilled as long as there are "consistent performances" of these things, which chatbots can deliver.)

Helen Ryland, a philosopher at the Open University, says we can be friends with chatbots now, so long as we apply a "degrees of friendship" framework. Instead of a long list of conditions that must all be fulfilled, the crucial component is "mutual goodwill," according to Ryland, and the other parts are optional. Take the example of online friendships: these are missing some elements but, as many people can attest, that doesn't mean they're not real or valuable. Such a framework applies to human friendships (there are degrees of friendship with the work friend versus the old friend) and also to chatbot friends. As for the claim that chatbots don't show goodwill, she contends that (a) that's the anti-robot bias of dystopian fiction talking, and (b) most social robots are programmed to avoid harming humans.

Beyond for and against

"We should resist technological determinism, or assuming that, inevitably, social AI is going to lead to the deterioration of human relationships," says philosopher Henry Shevlin.
He's keenly aware of the risks, but there's also so much left to consider: questions about the developmental effects of chatbots, how chatbots affect certain personality types, and what they even replace.

Even further underneath are questions about the very nature of relationships: how to define them, and what they're for. In a New York Times article about a woman in love with ChatGPT, sex therapist Marianne Brandon claims that relationships are "just neurotransmitters" inside our brains.

"I have those neurotransmitters with my cat," she told the Times. "Some people have them with God. It's going to be happening with a chatbot. We can say it's not a real human relationship. It's not reciprocal. But those neurotransmitters are really the only thing that matters, in my mind."

This is certainly not how most philosophers see it, and they disagreed when I brought up this quote. But maybe it's time to revise old theories.

"People should be thinking about these relationships, if you want to call them that, in their own terms and really getting to grips with what kind of value they provide people," says Luke Brunning, a philosopher of relationships at the University of Leeds.

To him, questions more interesting than "what would Aristotle think?" include: What does it mean to have a friendship that is so asymmetrical in terms of information and knowledge? What if it's time to reconsider these categories and shift away from terms like "friend," "lover," and "colleague"? Is each AI a unique entity?

"If anything can turn our theories of friendship on their head, that means our theories should be challenged, or at least we can look at it in more detail," Brunning says.
The more interesting question is: are we seeing the emergence of a unique form of relationship that we have no real grasp on?