
We've outsourced our confirmation biases to search engines
Forcing the use of general search terms can help people change their minds.

John Timmer | Mar 25, 2025 3:41 pm

Credit: Westend61

People are often quite selective about the information they'll accept, seeking out sources that will confirm their biases while discounting those that challenge their beliefs. In theory, search engines could change that. By prioritizing results from high-quality, credible sources, a search engine could ensure that people found accurate information more frequently, potentially opening them to the possibility of updating their beliefs.

Obviously, that hasn't worked out on the technology side, as people quickly learned how to game the algorithms used by search engines, meaning that the webpages that get returned have often been created by people with no interest in quality or credibility. But a new study suggests that the concept fails on the human side, too: people tend to devise search terms that are specific enough to ensure that the results end up reinforcing their existing beliefs.

The study also showed that invisibly swapping search terms for something more general can go a long way toward enabling people to change their minds.

Searching for affirmation

The new work was done by two researchers, Eugina Leung at Tulane University and Oleg Urminsky at the University of Chicago. Much of their study focuses on a simple question that people might turn to a search engine to answer: Is caffeine good or bad for you? If you wanted to search for that, you could ask "what are the health effects of caffeine?", which should get you a mixture of the pros and cons. But people could also ask in less neutral terms, such as "Is caffeine bad for you?" These more specific searches are likely to pull up a more biased selection of results than the general, neutral terms.

Leung and Urminsky ran some tests suggesting this is likely to be a real-world problem. Using a Google AdWords planner, they pulled some of the most common searches that included the word "caffeine" and found that over a quarter of them were narrowly focused and unlikely to return a representative spectrum of information on the molecule's effects.

Google data also suggests that people tend to craft search terms that reflect their cognitive biases. "Google Trends data show that the higher the Republican vote share in a state," Leung and Urminsky write, "the more likely Google users in that state were to search 'Trump win' or 'Trump won' compared to searching 'Biden win' or 'Biden won.'"

With that in mind, they designed a large series of controlled experiments to look at how these biases play out. We won't go into all the details, but the general format was to ask participants their thoughts on an issue, such as whether caffeine was good or bad for you. The participants were then told to go search for more information, after which their opinions on the topic were checked again.

The results, while not always dramatic, consistently pointed in the same direction: participants who crafted narrow search terms were more likely to structure them in a way that would return information confirming their biases. Or, as the researchers put it, "Experimental participants tended to devise questions that, if answered correctly, would corroborate rather than invalidate their hypothesis." And, not surprisingly, they were more likely to hang onto their original opinion after having been given the chance to look over the results of that search, whether it was provided by Google or GPT-3.5.
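To make the distinction concrete, here is a minimal, hypothetical sketch of what separating directionally narrow queries from neutral ones might look like, along with the kind of rewrite into a broader query that the researchers later tested. The keyword cues, function names, and rewrite template are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch, not from the study: flag directionally narrow queries
# and rewrite them into a broad, neutral form. The cue list and the rewrite
# template are invented for illustration only.

NARROW_CUES = (
    "bad for", "good for", "benefits of", "dangers of", "risks of", "harms of",
)

def is_directionally_narrow(query: str) -> bool:
    """Return True if the query leans toward one side of the question."""
    q = query.lower()
    return any(cue in q for cue in NARROW_CUES)

def neutralize(query: str, topic: str) -> str:
    """Swap a one-sided query for a broad, neutral one about the same topic."""
    if is_directionally_narrow(query):
        return f"what are the effects of {topic}?"
    return query

print(neutralize("Is caffeine bad for you?", "caffeine"))
# -> "what are the effects of caffeine?"
print(neutralize("What are the health effects of caffeine?", "caffeine"))
# -> unchanged; the query is already neutral
```

A real system would obviously need far more than keyword matching, but the sketch captures the core idea: the same topic can be queried in a way that invites one-sided answers or in a way that invites a balanced set of results.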
The topic didn't really matter that much. Leung and Urminsky tested a list of topics ranging from the societal impact of bitcoin to whether gas prices are likely to go up in the future. All of them displayed the same pattern. Again, the effect was never absolute: narrow searches tended to be 10 to 25 percent more common than general ones. There was simply a tendency to focus searches in a way that would likely reinforce existing beliefs. But that tendency was remarkably consistent.

So, the researchers decided to see if they could upend it.

Keeping it general

The simplest way to change the dynamics was to change the results returned by the search. So the researchers ran a number of experiments in which they gave all of the participants the same results, regardless of the search terms they had used. When everybody gets the same results, their opinions after reading them tend to move in the same direction, suggesting that search results can help change people's opinions.

The researchers also tried giving everyone the results of a broad, neutral search, regardless of the terms they'd entered. This weakened the probability that beliefs would persist through the process of formulating and executing a search. In other words, avoiding the sorts of focused, biased search terms allowed some participants to see information that could change their minds.

Despite all the swapping, participants continued to rate the search results as relevant. So providing more general search results, even when people were looking for more focused information, doesn't seem to harm people's perception of the service. In fact, Leung and Urminsky found that the AI version of Bing search would reformulate narrow questions into more general ones on its own.

That said, making this sort of change wouldn't be without risks. There are a lot of subject areas where a search shouldn't return a broad range of information, where grabbing a range of ideas would expose people to fringe and false information.

Nevertheless, it can't hurt to be aware of how we can use search services to reinforce our biases. So, in the words of Leung and Urminsky, "When search engines provide directionally narrow search results in response to users' directionally narrow search terms, the results will reflect the users' existing beliefs, instead of promoting belief updating by providing a broad spectrum of related information."

PNAS, 2025. DOI: 10.1073/pnas.2408175122

John Timmer, Senior Science Editor

John is Ars Technica's science editor. He has a Bachelor of Arts in Biochemistry from Columbia University and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.