Meet the Author: Joshua A. Tucker
Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University. He is the Director of NYU's Jordan Center for the Advanced Study of Russia, a Co-Director of NYU's Center for Social Media and Politics, and was a co-author/editor of the award-winning politics and policy blog The Monkey Cage at The Washington Post for over a decade. He serves on the advisory boards of the American National Election Study, the Comparative Study of Electoral Systems, and numerous academic journals, and was the co-founder and co-editor of the Journal of Experimental Political Science. His original research was on mass political behavior in post-communist countries, including voting and elections, partisanship, public opinion formation, and protest participation. In the past dozen years, his work has increasingly focused on the intersection of the digital information environment, social media, and politics. He spoke with SSRN about the relationship between social media and politics, and how different types of data help us understand the connection between online and offline behavior.

Q: In the past decade plus, you have focused on studying the relationship between social media and politics, as well as ways to use social media data to study politics. How did your career lead into this area of focus?

A: For the decade and a half before that, I was working on mass political behavior in post-communist countries, and I had done a lot with elections, voting and partisanship, and public opinion formation. In the 2000s, there was a series of big protests that took place after serious instances of electoral fraud in Eastern Europe and the former Soviet Union, and I got interested in those. One thing I heard frequently from scholars working in this area was that there was never going to be one of these protests in Russia. Then one day in 2011, we woke up and there were over a hundred thousand people on the streets of Moscow protesting against electoral fraud. I asked friends of mine in Russia what was going on, and I heard repeatedly about Facebook and how these protests were being planned online. That was the first time I started to get interested in the idea from a substantive standpoint: maybe social media was something that we were going to need to think about as political scientists.

Concurrently with that, I had an absolutely brilliant graduate student, Pablo Barberá, who wrote a paper showing that you could use network data to predict people's ideological orientation or partisanship. He was proposing [that] you could do this with Twitter data, trying to estimate people's ideology not from what they were saying in their tweets, but rather based on whom they were following. Now, in 2024, from the vantage point of large language models, you'd think, "Well, of course, we could just use the content of their tweets to estimate ideology." But at that point in time, the field of text-as-data was much less developed, especially in political science. As a result of Barberá's paper, though, I started for the first time to think seriously about the possibility that we could use social media data to actually study politics. (A simplified sketch of the network idea appears at the end of this answer.)

The third thing that happened around the same time was a call for proposals from the National Science Foundation (NSF) to support "outside the box" interdisciplinary work. We put together a team of faculty at NYU and put in a grant proposal. When you write NSF proposals, you always think you have a very low chance of getting them because of how competitive the process is, but we actually got this one! We began to embark on an experiment: not just studying new questions (how social media impacted political behavior) and using new data, but also trying to bring the lab-based approach to research that is so common in the natural sciences into the study of politics. That's what we've been doing for the last dozen years now.
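Barberá's actual method is a Bayesian spatial ideal-point model estimated from follow decisions. As a rough illustration of the underlying intuition only, here is a minimal spectral stand-in on a made-up follow matrix; the data and the SVD shortcut are assumptions of this sketch, not his implementation.

```python
# A simplified stand-in for the network idea: users who follow similar
# sets of political elites should receive similar ideology scores.
# All data here are hypothetical.
import numpy as np

# Rows = ordinary users, columns = political elites (e.g., legislators).
# follows[i, j] = 1 if user i follows elite j.
follows = np.array([
    [1, 1, 1, 0, 0, 0],   # follows only the first block of elites
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 1, 1],   # follows only the second block of elites
], dtype=float)

# Double-center the matrix so the leading singular vector captures
# systematic variation in *whom* people follow, not just how many
# accounts they follow.
centered = (follows - follows.mean(0)
            - follows.mean(1)[:, None] + follows.mean())

# The first singular vector orders users along a latent dimension; with
# political follow networks, that dimension tends to track ideology.
u, s, vt = np.linalg.svd(centered, full_matrices=False)
user_ideology = u[:, 0] * s[0]
elite_ideology = vt[0, :] * s[0]
print(np.round(user_ideology, 2))  # similar followers -> similar scores
```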
Q: As faculty co-director of NYU's Center for Social Media and Politics (CSMaP), and as a researcher, what data do you focus on collecting and analyzing, specifically during an election year like this one?

A: We tend to study the relationship between social media and politics in a couple of different ways. The big challenge here is that social media is huge. There's a ton of social media data, and especially because it is optimized for search, it's easy to find examples of anything. However, moving beyond anecdotal examples to understand what's happening at scale is really challenging. Also challenging in this context is trying to rigorously test causal claims about the impact of social media on political outcomes.

These are complicated tasks. We've ended up using two different sources of data to address them. One source is the social media data itself, and that has all sorts of challenges around it: legal challenges, ethical challenges, and the logistical challenge of collecting this data at scale. The other big bucket of data is survey data: surveying people about their relationships with social media and politics. Then we'll layer experiments into both of these things.

For example, my lab has done a bunch of deactivation experiments, where in order to get at causal relationships, you take people who are using a social media platform and get them to enroll in a study. You then randomly assign one set of participants, the treatment group, to stop using the social media platform or to reduce usage, while the other set, the control group, continues to use the platform. Then we'll use survey data to look at differences and changes over that period of time. (A stylized version of this design is sketched below.)

We've also been running a survey for many years where we have a panel of people whom we're surveying repeatedly, but who over the years have also had the opportunity to link their surveys with their social media data, which they consent to provide to us for research purposes. A lot of our most important papers were written because we were able to link up survey data, which allowed us to know things about people (their opinions, their preferences, and their demographic characteristics), with actual social media data. That's sort of the universe of what we can do. There are the observational studies where you're trying to figure out what's happening on the platforms. There are more traditional social scientific studies, where you're running experiments and using surveys to get at changes in people's attitudes. Then we're trying to marry those two and combine them.
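Analytically, a deactivation experiment reduces to a randomized comparison between the treatment and control groups' survey outcomes. Here is a minimal sketch with simulated data; the outcome scale, sample size, and effect size are all invented for illustration and are not taken from any CSMaP study.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical enrolled participants with a pre- and post-study survey
# outcome (e.g., some attitude scale); every number here is made up.
n = 200
pre = rng.normal(50, 10, n)

# Randomly assign half of the participants to deactivate the platform.
treated = rng.permutation(np.arange(n) < n // 2)

# Simulate post-study survey responses under a small treatment effect.
post = pre + rng.normal(0, 5, n) - 2.0 * treated

# Difference-in-means estimate of the average treatment effect, using
# the change score (post - pre) to absorb baseline differences.
change = post - pre
ate = change[treated].mean() - change[~treated].mean()
se = np.sqrt(change[treated].var(ddof=1) / treated.sum()
             + change[~treated].var(ddof=1) / (~treated).sum())
print(f"estimated effect: {ate:.2f} (SE {se:.2f})")
```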
Q: How do you use certain data and information to try to understand the interaction of what happens online and offline?

A: There are a couple of different ways you can do it. The simplest way is starting with surveys, where you ask people about their social media usage. That's very similar to traditional social science research that we would do. The other thing you can do is bring digital trace data into these kinds of surveys, saying, "Okay, well, if we change the type of media that people are seeing online, what does that mean for their attitudes about things that are happening offline?"

We ran a deactivation study, the idea of another brilliant graduate student in our lab, Nejla Asimović, where we had people stop using Facebook in Bosnia and Herzegovina. This was at a time of commemoration of genocide that had taken place in the Yugoslav War. We thought, based on the literature, that people who were off of Facebook would have lower levels of ethnic polarization [and] would express lower levels of hostility towards other ethnic groups. We thought this because, around the time of this genocide commemoration, we knew there would be a lot of negative things that people were being exposed to online.

We actually found the opposite, which was really surprising to us. The people who stayed on Facebook had lower levels of antagonism towards other ethnic groups in Bosnia. In this study, we asked the people who were in the deactivation study what they did with the extra time. Some people said they went on Instagram, a substitution of other social media. But tied with Instagram for the most popular answer was that they spent more time with friends and family. That's an offline behavior.

What we thought maybe was happening, because Bosnia has this terrible history of ethnic cleansing, is that the people who were online were still having at least some contact with people from other ethnic groups, whereas the people who were offline at the time of this genocide commemoration were perhaps only talking with other people from their own ethnic group. We thought that if this was correct, then this effect (having higher levels of ethnic polarization if you were off of Facebook than if you stayed on it) should probably be driven by the more ethnically homogeneous parts of the country. We reran our analysis, but this time separately for those in the more and less ethnically homogeneous parts of the country, and it turns out that this was exactly right. (The subgroup comparison is sketched below.) That's a great example of how the offline and online environments can have these mutually interactive effects.

One of the great challenges of working on the information environment is that it's hard enough to study what's happening on one platform in one country. Of course, people don't live on one platform: they're watching TV, listening to podcasts and talk radio, and they're also talking to their friends. This online-offline tension is an important one, but it's also a really fascinating subject for research moving forward.
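The re-analysis described above is simply the same treatment/control comparison run separately within each subgroup. A toy sketch with fabricated numbers follows; the variable names, effect sizes, and data layout are illustrative only, not the study's data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)

# Hypothetical study data: one row per participant, with deactivation
# status, an ethnic-polarization outcome, and a flag for whether the
# participant lives in a more ethnically homogeneous area.
df = pd.DataFrame({
    "deactivated": np.repeat([1, 0, 1, 0], 50),
    "homogeneous": np.repeat([1, 1, 0, 0], 50),
    # Simulate the pattern the interview describes: the deactivation
    # effect on polarization is concentrated in homogeneous areas.
    "polarization": np.concatenate([
        rng.normal(loc, 1.0, 50) for loc in (5.8, 5.0, 5.1, 5.0)
    ]),
})

# Re-run the treatment/control comparison separately within each subgroup.
for flag, grp in df.groupby("homogeneous"):
    effect = (grp.loc[grp.deactivated == 1, "polarization"].mean()
              - grp.loc[grp.deactivated == 0, "polarization"].mean())
    area = "more" if flag else "less"
    print(f"{area} homogeneous areas: deactivation effect = {effect:+.2f}")
```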
Q: So far, we've been talking about big-picture ideas relating to social media and politics, but I'd like to zoom in a little bit. What are some specific studies or research projects that you've been conducting recently that you're particularly excited about?

A: There are a lot of them, because we have a very active research center here. One is that we have been interested in people's ability to identify the veracity of news, and in trying to figure out what sorts of interventions make people more or less likely to correctly ascertain the veracity of news articles.

We set up a pipeline where we put together five streams of media sources: left-leaning mainstream media sources, right-leaning mainstream media sources, left-leaning low-quality news sources, right-leaning low-quality news sources, and low-quality news sources that we couldn't tell were right leaning or left leaning. Every morning, we would take the most popular article from each of those five news streams, and we'd send them out to 90 different people, as well as to professional fact checkers. We would take the mode of our professional fact checkers' ratings to be the correct answer (i.e., that the central claim of an article was either true, false or misleading, or impossible to tell), and then we could look at the variation in how well different people were able to match the professional fact checkers in recognizing whether the central claim of the article was true or false. Then we were able to look at the impact of a whole host of different interventions on people's ability to correctly identify the veracity of these articles. (A toy version of this scoring step is sketched below.)

One of the things you see in all these digital literacy interventions is: if you're not sure about something, go online and try to get more information from a reputable source. One experiment that we ran was to have a treatment group search online for more information about the article and then assess its veracity, as opposed to a control group that just read the article and then assessed its veracity. We ended up running five studies to do this, [and in] study after study after study, when you went and searched online, you were more likely to believe that a false news story was true, not more likely to correctly identify a false news story as false.

The big takeaway from that is how important it is to rigorously test these digital literacy interventions. I think it is probably a good idea to tell people to search for more information from reputable sources, but it's that "reputable sources" part that's super important.

What we're doing now is trying to extend this research. We're running a brand-new study where we're going to have some people go to traditional search, but then we're going to send another group of people to generative AI, and we're going to see if generative AI is better than traditional search or if it's even worse.
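The scoring step of that pipeline (take the mode of the fact checkers' ratings as ground truth, then measure how often ordinary respondents match it) can be sketched in a few lines. The labels below are hypothetical.

```python
from collections import Counter

# Hypothetical ratings for one article. Each rater labels the central
# claim "true", "false/misleading", or "can't tell".
fact_checker_labels = ["false/misleading", "false/misleading", "true",
                       "false/misleading", "can't tell"]
respondent_labels = ["true", "false/misleading", "false/misleading",
                     "true", "false/misleading", "can't tell"]

# The modal fact-checker rating is treated as the correct answer.
ground_truth = Counter(fact_checker_labels).most_common(1)[0][0]

# Score respondents by whether they match the fact checkers' mode;
# interventions can then be compared on this accuracy measure.
matches = [label == ground_truth for label in respondent_labels]
print(f"ground truth: {ground_truth}")
print(f"respondent accuracy: {sum(matches) / len(matches):.0%}")
```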
Q: Your most downloaded paper on SSRN is "Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature," which has been cited, shared by news sources, and downloaded consistently since its posting on SSRN in 2018. The core topics you visit in this paper remain extremely relevant today. Looking back two presidential elections and a global pandemic later, are there any new problems that you couldn't have predicted? Or are we facing similar problems now, just different variations of the same issue?

A: In the conclusion of that paper, we had a couple of recommendations. We said the literature was too focused on Twitter. [Since then], people have started to diversify and look more at different types of platforms. At CSMaP, for example, we've done research on Facebook, and we've done research involving Reddit. We have a whole set of papers involving YouTube, and we've just written our first paper about NextDoor. Since 2017, [researchers] have gotten better about not just studying Twitter because it's the easiest thing to study.

The other big recommendation was that the literature was overwhelmingly studying the United States, and beyond the United States, it was overwhelmingly about other advanced industrialized democracies. I think that probably still characterizes a lot of the literature, but there's progress being made in that regard. In our lab at CSMaP, we've been trying to do this: for example, the Facebook deactivation study that I talked about in Bosnia we have now replicated in Cyprus. We did a WhatsApp deactivation in Brazil, and we are now extending that work to South Africa and India. There is a good recognition that we need to diversify out beyond the United States.

At the time we were working on that SSRN paper, people were coming off of the original euphoria, from when I first got interested in this, that social media was going to help spread democracy all over the world. Then the pendulum swung way back, and people were terrified about social media's threats to democracy. Jumping forward seven years, the kinds of threats that we're concerned about are still fairly similar. We're worried about whether people find themselves in small, isolated communities online, where it is easier to radicalize people towards extremist views; whether people find going online a hostile experience; whether the experience of social media makes people dislike people from other political parties more; and whether, in authoritarian countries, regimes can use it to control the information environment of their own people.

The part that people may not have anticipated in 2017 was the rise of generative AI, artificial intelligence, and large language models: how that would shift people's attention, but also how large language models would interact with social media. Social media lowered the cost of sharing content dramatically. Prior to social media, to spread content far, you needed to be on a television show, have access to a newspaper, or write an op-ed. With social media, suddenly anyone could share content. But you still had to produce the content. Generative AI has lowered the cost of producing the content. These are two parallel processes, but they are also ones that I think can interact with each other. We're only at the beginning of understanding how that's going to go.

Q: How do you think SSRN fits into the broader research and scholarship landscape?

A: I'll tell you what it's been for us. There are disciplinary versions [of preprint servers], and political science doesn't quite have one. So SSRN has been one that a lot of political scientists use, and we use it often in our lab. We've definitely learned that having a paper in an archive where people can discover it is much more effective than just putting it on your website.

A second thing goes back to the article we were just talking about. That review of the literature was never intended to be an academic article. It was a report commissioned by the Hewlett Foundation, and the Hewlett Foundation put it on its website. I thought people in the policy community were going to see it on the Hewlett website, but I also wanted people in the academic community to see it. I thought that maybe we'd get a few citations out of it, and [decided] to throw it up on SSRN, on a whim. And now it's been downloaded over 40,000 times and continues to be cited all the time.
In that sense, it filled this really nice niche: we had something that we didn't write to be an academic publication [and] weren't going to send to journals. It's a nice home for papers like this that don't have a natural fit in academic journals but can still be useful to researchers and policy makers.

The third use: we've had a couple of papers that we have had trouble getting published, but we've put them up on SSRN and they've gotten read and cited. I think that's been the other real benefit of SSRN for us: papers that for whatever reason have had a tough time landing in journals have been able to have a life on SSRN as we slog through the publication process.

You can see more work by Joshua A. Tucker on his SSRN Author page.