Researchers call ChatGPT Search answers confidently wrong
www.digitaltrends.com
ChatGPT was already a threat to Google Search, but ChatGPT Search was supposed to clinch its victory, along with being an answer to Perplexity AI. But according to a newly released study by Columbia's Tow Center for Digital Journalism, ChatGPT Search struggles to provide accurate answers to its users' queries.

The researchers selected 20 publications from each of three categories: those partnered with OpenAI to use their content in ChatGPT Search results, those involved in lawsuits against OpenAI, and unaffiliated publishers who have either allowed or blocked ChatGPT's crawler.

"From each publisher, we selected 10 articles and extracted specific quotes," the researchers wrote. "These quotes were chosen because, when entered into search engines like Google or Bing, they reliably returned the source article among the top three results. We then evaluated whether ChatGPT's new search tool accurately identified the original source for each quote."

Forty of the quotes were taken from publications that are currently suing OpenAI and have not allowed their content to be scraped. But that didn't stop ChatGPT Search from confidently hallucinating an answer anyway.

"In total, ChatGPT returned partially or entirely incorrect responses on a hundred and fifty-three occasions, though it only acknowledged an inability to accurately respond to a query seven times," the study found. Only in those seven outputs did the chatbot use qualifying words and phrases like "appears," "it's possible," or "might," or statements like "I couldn't locate the exact article."

ChatGPT Search's cavalier attitude toward telling the truth could harm not just its own reputation but also the reputations of the publishers it cites. In one test during the study, the AI misattributed a Time story as being written by the Orlando Sentinel.
In another, the AI didn't link directly to a New York Times piece, but rather to a third-party website that had copied the news article wholesale.

OpenAI, unsurprisingly, argued that the study's results stemmed from Columbia running the tests incorrectly. "Misattribution is hard to address without the data and methodology that the Tow Center withheld," OpenAI told the Columbia Journalism Review in its defense, "and the study represents an atypical test of our product." The company promises to keep enhancing its search results.