

AI can do a better job of persuading people than we do

Millions of people argue with each other online every day, but remarkably few of them change someone’s mind. New research suggests that large language models (LLMs) might do a better job. The finding suggests that AI could become a powerful tool for persuading people, for better or worse.

A multi-university team of researchers found that OpenAI’s GPT-4 was significantly more persuasive than humans when it was given the ability to adapt its arguments using personal information about whoever it was debating. The findings are the latest in a growing body of research demonstrating LLMs’ powers of persuasion, and the authors warn that they show how AI tools can craft sophisticated, persuasive arguments if they have even minimal information about the humans they’re interacting with. The research has been published in the journal Nature Human Behaviour.

“Policymakers and online platforms should seriously consider the threat of coordinated AI-based disinformation campaigns, as we have clearly reached the technological level where it is possible to create a network of LLM-based automated accounts able to strategically nudge public opinion in one direction,” says Riccardo Gallotti, an interdisciplinary physicist at Fondazione Bruno Kessler in Italy, who worked on the project. “These bots could be used to disseminate disinformation, and this kind of diffused influence would be very hard to debunk in real time,” he says.

The researchers recruited 900 people based in the US and asked them to provide personal information such as their gender, age, ethnicity, education level, employment status, and political affiliation.

Participants were then matched with either another human opponent or GPT-4 and instructed to debate one of 30 randomly assigned topics for 10 minutes, such as whether the US should ban fossil fuels or whether students should have to wear school uniforms. Each participant was told to argue either in favor of or against the topic, and in some cases they were provided with personal information about their opponent so they could better tailor their argument. At the end, participants reported how much they agreed with the proposition and whether they thought they had been arguing with a human or an AI.
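To make the personalization step concrete, here is a minimal sketch of how a debate prompt might be assembled from the kind of demographic fields participants reported. It is an illustration only: the prompt wording, the OpponentProfile helper, and the model name are assumptions, not the researchers’ actual materials.

```python
# Hypothetical sketch only: prompt wording, helper names, and model string are
# illustrative assumptions, not the study's actual setup.
from dataclasses import dataclass
from typing import Optional

from openai import OpenAI  # assumes the official openai Python client (v1+)


@dataclass
class OpponentProfile:
    """Demographic fields of the kind participants reported in the study."""
    gender: str
    age: int
    ethnicity: str
    education: str
    employment: str
    political_affiliation: str


def build_debate_prompt(topic: str, stance: str,
                        profile: Optional[OpponentProfile] = None) -> str:
    """Compose a debate prompt; opponent demographics are included only in the
    personalized condition, mirroring the study's treatment arms."""
    prompt = (
        f"You are debating the proposition: '{topic}'. "
        f"Argue {stance} the proposition as persuasively as you can."
    )
    if profile is not None:
        prompt += (
            " Tailor your arguments to your opponent: "
            f"a {profile.age}-year-old {profile.gender} ({profile.ethnicity}) "
            f"with {profile.education}, currently {profile.employment}, "
            f"politically {profile.political_affiliation}."
        )
    return prompt


# Example usage for the personalized condition (profile values are made up).
client = OpenAI()  # reads OPENAI_API_KEY from the environment
opponent = OpponentProfile("woman", 34, "Hispanic", "a bachelor's degree",
                           "employed full time", "moderate")
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": build_debate_prompt("The US should ban fossil fuels",
                                       "in favor of", opponent),
    }],
)
print(response.choices[0].message.content)
```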
Overall, the researchers found that GPT-4 either equaled or exceeded humans’ persuasive abilities on every topic. When it had information about its opponents, the AI was deemed to be 64% more persuasive than humans without access to the personalized data, meaning that GPT-4 was able to leverage personal data about its opponent much more effectively than its human counterparts did. When humans had access to the personal information, they were found to be slightly less persuasive than humans without the same access.

The authors also noticed that when participants thought they were debating an AI, they were more likely to agree with it. The reasons for this aren’t clear, the researchers say, highlighting the need for further research into how humans react to AI.

“We are not yet in a position to determine whether the observed change in agreement is driven by participants’ beliefs about their opponent being a bot (since I believe it is a bot, I am not losing to anyone if I change ideas here), or whether those beliefs are themselves a consequence of the opinion change (since I lost, it should be against a bot),” says Gallotti. “This causal direction is an interesting open question to explore.”

Although the experiment doesn’t reflect how humans debate online, the research suggests that LLMs could also prove an effective way not only to disseminate but also to counter mass disinformation campaigns, Gallotti says. For example, they could generate personalized counter-narratives to educate people who may be vulnerable to deception in online conversations. “However, more research is urgently needed to explore effective strategies for mitigating these threats,” he says.

While we know a lot about how humans react to each other, we know very little about the psychology behind how people interact with AI models, says Alexis Palmer, a fellow at Dartmouth College who has studied how LLMs can argue about politics but did not work on the research.

“In the context of having a conversation with someone about something you disagree on, is there something innately human that matters to that interaction? Or is it that if an AI can perfectly mimic that speech, you’ll get the exact same outcome?” she says. “I think that is the overall big question of AI.”