WWW.NATURE.COM
Audio long read: How a silly science prize changed my career
Nature, Published online: 27 December 2024; doi:10.1038/d41586-024-03970-6
The Ig Nobel prizes are famed for spotlighting offbeat research. Nature investigates how some winners feel about their honour.
WWW.NATURE.COM
'CAR T cells': a festive parody song from the Nature Podcast
Nature, Published online: 25 December 2024; doi:10.1038/d41586-024-04238-9
The Nature Podcast team have rewritten a popular holiday song in light of one of the biggest science trends of 2024.
WWW.NATURE.COM
The Nature Podcast highlights of 2024
Nature, Published online: 25 December 2024; doi:10.1038/d41586-024-03981-3
The team select some of their favourite stories from the past 12 months.
BLOG.SSRN.COM
Meet the Author: Joshua A. Tucker
SSRN

Joshua A. Tucker is Professor of Politics, affiliated Professor of Russian and Slavic Studies, and affiliated Professor of Data Science at New York University. He is the Director of NYU's Jordan Center for Advanced Study of Russia, a Co-Director of NYU's Center for Social Media and Politics, and was a co-author/editor of the award-winning politics and policy blog The Monkey Cage at The Washington Post for over a decade. He serves on the advisory boards of the American National Election Study, the Comparative Study of Electoral Systems, and numerous academic journals, and was the co-founder and co-editor of the Journal of Experimental Political Science. His original research was on mass political behavior in post-communist countries, including voting and elections, partisanship, public opinion formation, and protest participation. In the past dozen years his work has increasingly focused on the intersection of the digital information environment, social media, and politics. He spoke with SSRN about the relationship between social media and politics, and how different types of data help us understand the connection between online and offline behavior.

Q: In the past decade plus, you have focused on studying the relationship between social media and politics, as well as ways to use social media data to study politics. How did your career lead into this area of focus?

A: For the decade and a half before, I was working on mass political behavior in post-communist countries, and I had done a lot with elections, voting and partisanship, and public opinion formation. In the 2000s, there were a series of big protests that took place after serious instances of electoral fraud in Eastern Europe and the former Soviet Union, and I got interested in those. One thing I heard frequently from scholars working in this area was that there was never going to be one of these protests in Russia.
Then one day in 2011, we woke up and there were over a hundred thousand people on the streets of Moscow protesting against electoral fraud. I asked friends of mine in Russia what was going on, and I heard repeatedly about Facebook and how these protests were being planned online. That was the first time I started to get interested, from a substantive standpoint, in the idea that maybe social media was something we were going to need to think about as political scientists.

Concurrently with that, I had this absolutely brilliant graduate student, Pablo Barberá, who wrote a paper showing that you could use network data to predict people's ideological orientation or partisanship. He was proposing [that] you could do this with Twitter data, trying to estimate people's ideology not from what they were saying in their tweets, but rather based on whom they were following. Now, in 2024, from the vantage point of large language models, you'd think, "Well, of course, we could just use the content of their tweets to estimate ideology." But at that point in time, the field of text-as-data was much less developed, especially in political science. As a result of Barberá's paper, though, I for the first time started to think seriously about the possibility that we could use social media data to actually study politics.

The third thing that happened around the same time was a call for proposals from the National Science Foundation (NSF) to support outside-the-box interdisciplinary work. We put together a team of faculty at NYU and put in a grant proposal. When you write NSF proposals, you always think you have a very low chance of getting them because of how competitive the process is, but we actually got this one!
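The ideology-from-follows idea mentioned above can be illustrated with a deliberately simplified sketch. Everything here is invented for illustration: the account names, the hand-assigned scores, and the plain averaging, which stands in for the statistical ideal-point estimation the actual research used.

```python
import statistics

# Hypothetical ideology scores for a handful of political accounts on a
# left (-1) to right (+1) scale. In the real method these come from a
# statistical model, not hand-assigned numbers.
ELITE_SCORES = {
    "@dem_politician": -0.75,
    "@gop_politician": 0.875,
    "@centrist_pundit": 0.125,
    "@left_outlet": -0.5,
    "@right_outlet": 0.75,
}

def estimate_ideology(followed_accounts):
    """Estimate a user's ideology as the mean score of the known
    political accounts they follow; None if they follow none of them."""
    scores = [ELITE_SCORES[a] for a in followed_accounts if a in ELITE_SCORES]
    return statistics.mean(scores) if scores else None

# A user who follows two right-leaning accounts (plus one we know
# nothing about) gets a right-of-center estimate:
print(estimate_ideology(["@gop_politician", "@right_outlet", "@sports_fan"]))
# 0.8125
```

The point of the design is that the estimate never reads the user's tweets at all: the follow graph alone carries the signal.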
We began to embark on this experiment, not just studying new questions (how social media impacted political behavior) and using new data, but also an experiment in trying to bring the lab-based approach to research that is so common in the natural sciences into the study of politics. That's what we've been doing for the last dozen years now.

Q: As faculty co-director of NYU's Center for Social Media and Politics (CSMaP), and as a researcher, what data do you focus on collecting and analyzing, specifically during an election year like this one?

A: We tend to study the relationship between social media and politics in a couple of different ways. The big challenge here is that social media is huge. There's a ton of social media data, and especially because it is optimized for search, it's easy to find examples of anything. However, moving beyond anecdotal examples to understand what's happening at scale is really challenging. Also challenging in this context is trying to rigorously test causal claims about the impact of social media on political outcomes.

These are complicated tasks. We've ended up using two different sources of data to address them. One source is the social media data itself, and that has all sorts of challenges around it: legal challenges, ethical challenges, and the logistical challenge of collecting this data at scale. The other big bucket is survey data: surveying people about their relationships with social media and politics. Then we'll layer experiments into both of these things.

For example, my lab has done a bunch of deactivation experiments, where, in order to get at causal relationships, you take people who are using a social media platform and get them to enroll in a study. You then randomly assign one set of participants, the treatment group, to stop using the social media platform or to reduce usage, while the other set, the control group, continues to use the platform.
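The enrollment and random-assignment step described here can be sketched in a few lines. This is an illustrative outline only, not the lab's actual code; the function name, group labels, and seed are made up.

```python
import random

def assign_conditions(participant_ids, seed=2024):
    """Randomly split enrolled participants into a deactivation
    (treatment) group and a keep-using-the-platform (control) group."""
    ids = list(participant_ids)
    random.Random(seed).shuffle(ids)  # seeded so the split is reproducible
    half = len(ids) // 2
    return {"treatment": ids[:half], "control": ids[half:]}

groups = assign_conditions(range(1, 101))
print(len(groups["treatment"]), len(groups["control"]))  # 50 50
```

Because assignment is random, any post-period difference between the two groups' survey responses can be attributed to the deactivation itself rather than to who chose to deactivate.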
Then we'll use survey data to look at differences and changes over that period of time.

We've also been running a survey for many years where we have a panel of people whom we're surveying repeatedly, but who also, over the years, have had the opportunity to link their surveys with their social media data, which they consent to provide to us for research purposes as well. A lot of our most important papers were written because we were able to link up survey data, which allowed us to know things about people (their opinions, their preferences, [and] their demographic characteristics), with actual social media data. That's sort of the universe of what we can do. There are the observational studies where you're trying to figure out what's happening on the platforms. There are more traditional social scientific studies, where you're running experiments and using surveys to get at changes in people's attitudes. Then we're trying to marry those two and combine them.

Q: How do you use certain data and information to try to understand the interaction of what happens online and offline?

A: There are a couple different ways you can do it. The simplest is starting with surveys, where you ask people about their social media usage. It's very similar to traditional social science research that we would do. The other thing you can do is bring digital trace data into these kinds of surveys, asking, "Okay, well, if we change the type of media that people are seeing online, what does that mean for their attitudes about things that are happening offline?"

We ran this deactivation study, the idea of another brilliant graduate student in our lab, Nejla Asimović, where we had people stop using Facebook in Bosnia and Herzegovina. This was at a time of commemoration of genocide that had taken place in the Yugoslav War.
We thought, based on the literature, that people who were off Facebook would have lower levels of ethnic polarization [and] would express lower levels of hostility towards other ethnic groups. We thought this because, around the time of this genocide commemoration, we knew there would be a lot of negative things that people were being exposed to online.

We actually found the opposite, which was really surprising to us. The people who stayed on Facebook had lower levels of antagonism towards other ethnic groups in Bosnia. In this study, we asked the people who were in the deactivation study what they did with the extra time. Some people said they went on Instagram, a substitution of other social media. But tied with Instagram for the most popular answer was that they spent more time with friends and family. That's an offline behavior.

What we thought maybe was happening (because Bosnia has this terrible history of ethnic cleansing) is that the people who were online were still having at least some contact with people from other ethnic groups, whereas the people who were offline at the time of this genocide commemoration were perhaps only talking with other people from their own ethnic group. We thought that if this were correct, then this effect (having higher levels of ethnic polarization if you are off Facebook than if you stayed on it) should probably be driven by the more ethnically homogeneous parts of the country. We reran our analysis, but this time separately for those in the more and less ethnically homogeneous parts of the country, and it turns out that this was exactly right. That's a great example of how the offline and online environments can have these mutually interactive effects.

One of the great challenges of working in the information environment is that it's hard enough to study in one country what's happening on one platform.
Of course, people don't live on one platform: they're watching TV, listening to podcasts and talk radio, and they're also talking to their friends. This online-offline tension is an important one, but it's also a really fascinating subject for research moving forward.

Q: So far, we've been talking about big-picture ideas relating to social media and politics, but I'd like to zoom in a little bit. What are some specific studies or research projects that you've been conducting recently that you're particularly excited about?

A: There are a lot of them, because we have a very active research center here. One is that we have been interested in people's ability to identify the veracity of news, trying to figure out what sorts of interventions make people more or less likely to correctly ascertain the veracity of news articles.

We set up a pipeline where we put together five streams of media sources: left-leaning mainstream media sources, right-leaning mainstream media sources, left-leaning low-quality news sources, right-leaning low-quality news sources, and low-quality news sources that we couldn't tell were right-leaning or left-leaning. Every morning, we would take the most popular article from each of those five news streams, and we'd send them out to 90 different people, as well as to professional fact checkers. We would take the mode of our professional fact checkers' ratings to be the correct answer (i.e., that the central claim of an article was either true, false or misleading, or impossible to tell), and then we could look at the variation in how well different people were able to match the professional fact checkers in recognizing whether the central claim of the article was true (or false) or not.
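Taking the mode of the fact-checkers' ratings as ground truth, and then scoring survey respondents against it, is simple arithmetic. A small sketch (with invented label strings and helper names, not the study's actual categories or code) might look like:

```python
from collections import Counter

def modal_label(checker_labels):
    """Use the most common rating among professional fact checkers as
    the 'correct' answer for an article."""
    return Counter(checker_labels).most_common(1)[0][0]

def respondent_accuracy(respondent_labels, truth):
    """Share of survey respondents whose rating matches the
    fact-checker mode."""
    return sum(1 for lab in respondent_labels if lab == truth) / len(respondent_labels)

# Two of three checkers rate the article's central claim false/misleading,
# so that becomes ground truth; half the respondents below match it.
checkers = ["false/misleading", "false/misleading", "true"]
truth = modal_label(checkers)
print(truth)  # false/misleading
print(respondent_accuracy(["true", "false/misleading",
                           "true", "false/misleading"], truth))  # 0.5
```

Using the mode rather than any single checker's rating smooths over individual fact-checker disagreement on borderline articles.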
Then we were able to look at the impact of a whole host of different interventions on people's ability to correctly identify the veracity of these articles.

One of the things you see in all these digital literacy interventions is: if you're not sure about an article, go online and try to get more information from a reputable source. One experiment that we ran was to have a treatment group search online for more information about the article and then assess its veracity, as opposed to the control group, which just read the article and then assessed its veracity. We ended up running five studies to do this, [and in] study after study after study, when you went and searched online, you were more likely to believe that a false news story was true, not more likely to correctly identify a false news story as false.

The big takeaway from that is how important it is to rigorously test these digital literacy interventions. I think it is probably a good idea to tell people to search for more information from reputable sources, but it's that "reputable sources" part that's super important.

What we're doing now is trying to extend this research. We're running a brand new study where we're going to have some people go to traditional search, but then we're going to send another group of people to generative AI, and we're going to see if generative AI is better than traditional search or if it's even worse.

Q: Your most downloaded paper on SSRN is "Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature," which has been cited, shared by news sources, and downloaded consistently since its posting on SSRN in 2018. The core topics you visit in this paper remain extremely relevant today. Looking back, two presidential elections and a global pandemic later, are there any new problems that you couldn't have predicted?
Or are we facing similar problems now, just different variations of the same issue?

A: In the conclusion of that paper, we had a couple of recommendations. We said the literature was too focused on Twitter. [Since then], people have started to diversify and look more at different types of platforms. At CSMaP, for example, we've done research on Facebook, and we've done research involving Reddit. We have a whole set of papers involving YouTube, and we've just written our first paper about Nextdoor. Since 2017, [researchers] have gotten better about not just studying Twitter because it's the easiest thing to study.

The other big recommendation was that the literature was overwhelmingly studying the United States, and beyond the United States, it was overwhelmingly about other advanced industrialized democracies. I think that probably still characterizes a lot of the literature, but there's progress being made in that regard. In our lab at CSMaP, we've been trying to do this: for example, the Facebook deactivation study that I talked about in Bosnia, we have now replicated in Cyprus. We did a WhatsApp deactivation in Brazil, and are now extending that work to South Africa and India. There is a good recognition that we need to diversify out beyond the United States.

At the time we were working on that SSRN paper, people were coming off of the original euphoria (from when I first got interested in the topic) that social media was going to help spread democracy all over the world. Then the pendulum swung way back, and people were terrified about social media's threats to democracy. Jumping forward seven years, in terms of the kinds of threats that we're concerned about, they're still fairly similar.
We're worried about whether people find themselves in small, isolated communities online, where it is easier to radicalize people towards extremist views; whether people find going online a hostile experience; whether the experience of social media makes people dislike people from other political parties more; and whether, in authoritarian countries, regimes can use it to control the information environment of their own people.

The part that people may not have anticipated in 2017 was questions about the effect of the rise of generative AI, artificial intelligence, and large language models: how that would shift people's attention, but also how large language models would interact with social media. Social media lowered the cost of sharing content dramatically. Prior to social media, to spread content far, you needed to be on a television show, have access to a newspaper, or write an op-ed. With social media, suddenly anyone could share content. But you still had to produce the content. Generative AI has lowered the cost of producing the content. These are two parallel processes, but they are also ones that I think can interact with each other. We're only at the beginning of understanding how that's going to go.

Q: How do you think SSRN fits into the broader research and scholarship landscape?

A: I'll tell you what it's been for us. There are disciplinary versions [of preprint servers], and political science doesn't quite have one. So SSRN has been one that a lot of political scientists use, and we use it often in our lab. We've definitely learned that having a paper in an archive where people can discover it is much more effective than just putting the paper on your website.

A second thing goes back to the article we were just talking about. This review of the literature was never intended to be an academic article. It was a report commissioned by the Hewlett Foundation, and the Hewlett Foundation put it on its website.
I thought people in the policy community were going to see it on the Hewlett website, but I also wanted people in the academic community to see it. I thought that maybe we'd get a few citations out of it, and [decided] to throw it up on SSRN on a whim. Now it's been downloaded over 40,000 times and continues to be cited all the time. In that sense, it filled this really nice niche: we had something that we didn't write to be an academic publication [and] weren't going to send to journals. SSRN is a nice home for papers like this that don't have a natural fit in academic journals but can still be useful to researchers and policy makers.

The third use: we've had a couple of papers that we have had trouble getting published, but we've put them up on SSRN and they've gotten read and cited. I think that's been the other real benefit of SSRN for us: papers that for whatever reason have had a tough time landing in journals have been able to have a life on SSRN as we slog through the publication process.

You can see more work by Joshua A. Tucker on his SSRN Author page here.
BLOG.SSRN.COM
Meet the Author: Sara Gerke
SSRN

Sara Gerke is an Associate Professor of Law and Richard W. & Marie L. Corman Scholar at the University of Illinois College of Law. Her research focuses on the ethical and legal challenges of artificial intelligence and big data for health care and health law in the United States and Europe. She also researches comparative law and ethics of other issues at the cutting edge of medical developments, biological products, reproductive medicine, and digital health more generally. Professor Gerke has over 60 publications in health law and bioethics, and her work has appeared in leading law, medical, scientific, and bioethics journals. She is leading several research projects, including CLASSICA (Validating AI in Classifying Cancer in Real-Time Surgery) and OperA (Optimizing Colorectal Cancer Prevention Through Personalized Treatment With Artificial Intelligence), both funded by the European Union. She spoke with SSRN about the legal and ethical challenges of integrating artificial intelligence into the medical field and how much farther we still have to go.

Q: You've led and been part of research projects on things like artificial intelligence (AI) in healthcare and the legal and ethical implications of integrating new and cutting-edge technology into medical practice. What has driven your interest and motivation within this field of study?

A: I moved to the U.S. about six years ago. Previously, I was the General Manager of the Institute for German, European and International Medical Law, Public Health Law and Bioethics of the Universities of Heidelberg and Mannheim, and I was also doing a lot of grant writing. I already felt we were getting to the point where we would have to submit applications involving technology, because that's the new world, where things are heading. It takes quite some time until you can actually execute a grant.
I somehow got to know about this project [in the U.S.], and I was really excited about it. It would have taken probably another two years until I could carry out something similar in Germany. Long term, I had always thought to pursue an international career so that I could write more articles in English and really spread my work around the world. I applied for that position, was very lucky to get it, and was responsible for the day-to-day work of the Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL) at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. I feel these days everyone is very aware of AI, at the latest since ChatGPT came on the market. But six years ago, we were one of the very few small groups really looking at AI and digital technology in the healthcare field from an ethical but also legal perspective.

Q: Your most downloaded paper on SSRN, "Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare," written in 2020, highlights several main concerns regarding AI in healthcare, both in the legal and ethical sense. In the four years since you wrote this, which legal or ethical problems do you think have made the most progress?

A: I don't think we have made enough progress in the years since I wrote that paper. This paper has been cited so much because we were essentially the first who wrote a paper on this topic. At that point, there wasn't much interest yet. But what we are seeing comparatively (because I do a lot of comparative law between the EU and the U.S.) is that a lot has happened since then on the regulatory front. The Food and Drug Administration (FDA) has thought a lot about how to regulate AI and machine learning (ML)-based medical devices.
Also, we have just recently seen in the EU the AI Act, the world's first comprehensive regulatory framework for AI.

In probably the last two years, people have become very aware of questions of liability and liability risk when introducing those tools into the healthcare system and into hospitals. The FDA has already authorized over 950 AI/ML-based medical devices. And frankly, not all AI tools are considered or classified as medical devices. We have a lot of in-house developments and other particular clinical decision tools that fall outside of FDA regulation. Those tools [have] already been implemented in healthcare. In recent years, these questions of "who's going to be held liable if something goes wrong?" have become much more pressing.

Q: How did working on this research early on inform what you do now?

A: I feel very fortunate that I have been working in this field for such a long time now, building my expertise. I can now adapt to new challenges relatively easily because I have long experience of going deep into the legal and ethical issues of AI. I'm doing a lot of interdisciplinary work. I work with physicians and with engineers, so I can always get new ideas from those collaborations. Every day you read something [new] about AI: it's hard to keep up at that pace. So some groundwork of knowledge is really helpful for catching up on all the new developments happening every single day.

Q: You've done a lot of work regarding medical AI applications used for self-diagnosing and the importance of labeling these apps properly. Talk a little bit about why it's important to educate people about the function and purpose of self-diagnosing AI apps like these.

A: I'm still doing a lot of work on the questions of labeling, because I think it's so important. Labeling is just one small piece of this entire puzzle. I am a believer in labeling AI, in particular in the healthcare field. Imagine two different types of tools.
You can first imagine a tool which is being used by a physician in a hospital. The label is directed to the healthcare providers [and those] who need information about the tool to assess whether it makes sense to use it safely on their patients.

On the other hand, we have a lot of direct-to-consumer AI tools, including apps. Here, users also need to get some type of information, though it might be a little different in its details. A consumer might care about things like privacy, whether the tool is safe to use, or whether data is properly protected, while a physician might need more granular information about whether the dataset [has] been properly tested, with breakdowns by race, ethnicity, location, age, gender, etc. That information is really important for physicians to know, and unfortunately, we don't have proper standards in place at this point that require manufacturers to disclose it. The Office of the National Coordinator for Health IT (ONC) has recently put out a rule that requires some type of transparency and some kind of nutrition label. [It's] really nice to see that regulators are now finally doing something with regard to labeling, but only a tiny number of tools are covered by that rule. I'm still waiting and hoping for the FDA to change that. At some point, hopefully, all AI tools will have some type of label, so that users get proper information. Sometimes one gets the criticism, "well, no one reads labels," but it's better to have that information as an option to read if you want to. I think there is a necessity to require that disclosure.

Q: You've pointed out in your work that while many of these apps might be labeled as "information only" tools rather than actual diagnostic tools, people sometimes misperceive them as providing real diagnoses.
Do you see this misperception as an example or indicator of some larger issue with how medical AI tools are understood by consumers?

A: The issue that we see with some of these apps (for example, the electrocardiogram app from Apple) is that they are actually information tools and are not [meant] to make a diagnosis. But if people are using those tools for [something] like skin cancer screening, they usually tend to believe whatever the AI tool tells them. [They] might just skip a doctor visit because they are scanning with their app and [think], "oh, it seems I don't have any cancer. Everything looks good, so I don't necessarily need to go to the doctor right now." We see a lot of empirical data showing that consumers perceive those tools as diagnostic tools. But if one looks at the language, all of those manufacturers clearly articulate that a direct-to-consumer tool, often available without any prescription, doesn't replace going to the doctor. I think we need to have measures [and] more education around it. The majority of health AI apps or general health apps are not reviewed by the FDA: they fall outside the Federal Food, Drug, and Cosmetic Act. So it is hard for consumers to assess whether an app is reliable or not, because consumers usually just type what they are looking [for] into the App Store, and whatever pops up first, they will likely download.

Q: AI has developed very quickly in the past few years, and it has raised problems that nobody really had to think about before. Because of this, there's a retroactive response to certain problems. How do you think researchers can begin to anticipate where the field is headed, to ensure that AI is used responsibly in healthcare?

A: I personally don't have a crystal ball, and I think that's the problem. Because honestly, if you had asked people maybe three [or] four years ago, I don't think they would have suggested that we'd come [this] far so quickly.
We have these generative AI tools that have incredible capacities but also raise a lot of new issues which we had not anticipated. Once you get it out of the box, it's hard to get it back into the box.

That's a problem now, because retroactively making laws around this is really challenging, and we are not seeing right now in the U.S. that that's necessarily going to change. I think the approach in the U.S. is going to be more like a mosaic style. We have different regulators, and everyone is going to do some things in their wheelhouse. Hopefully, there will be enough collaboration and understanding that it's going to be a mosaic or puzzle to be completed, rather than overlapping and making it much more complicated for stakeholders to understand and oversee.

Q: Are there any papers, projects, or research you are working on right now that you're particularly excited about?

A: I always have a lot of projects going on, because AI keeps me busy. I have several research projects I'm involved in. I'm leading two projects in the ethics and legal field, which are called CLASSICA and OperA, and they are funded by the European Union. I've also been one of the PIs (principal investigators) on a National Institute of Biomedical Imaging and Bioengineering (NIBIB) and National Institutes of Health Office of the Director (NIH OD) grant on technology. In the two projects funded by the European Union in particular, clinical trials are being carried out. One is for colon cancer prevention: AI tools are being tested to see if they are going to be helpful in the long run. Similarly, the CLASSICA project is a project on AI-assisted surgery.
The surgeons are testing an AI tool that can predict whether tissue is benign or malignant in real time during a surgery.

I'm not involved in the clinical trials, but for me, it's really exciting, because my team is looking at some of the legal and ethical issues, such as, "do surgeons have any reluctance to implement such a tool and use it in the operating room?" [and] "are there any liability risks that they are worried about?"

My work keeps me busy in the liability space, but also in regulation and the questions of how regulators in the U.S. and Europe should regulate more complex tools like generative AI. That's going to be a real challenge.

There are many questions, and in the liability space in particular, the more sophisticated the AI tool becomes, the more interesting and unsettled the question is. How does tort law deal with an AI tool that practices medicine? An AI tool is not considered a legal person at this point. So at some point in the future, you might have an AI tool that is so sophisticated [that] it's the standard of care and is implemented in a hospital. It might be that you can't find any human fault in the physician using the AI tool: the physician needs to rely on the output of the AI tool because it's so complex. And then there's this question of, if harm occurs, who's going to be held liable if you can't find human fault in the physician? Because it was totally fine for the physician to rely on the AI tool in the first place. These are questions which we need to tackle in the future, once such AI is implemented in the healthcare field.

Q: How long do some of these clinical trials take?

A: The ones carried out in the projects I am involved in last for several years. These are long-term projects spanning four to five years, and I think they are [some] of the rare occasions where clinical trials are actually being carried out in the AI field. The majority of AI tools, in fact, especially in the U.S., have not undergone any clinical trial studies.
The way the Federal Food, Drug, and Cosmetic Act works is that there is a pathway called the 510(k) pathway. What you need to show as a manufacturer is that your device is substantially equivalent to another legally marketed device, which usually does not require any clinical evidence. We have seen that the majority of the AI-based medical devices authorized by the FDA went through the 510(k) pathway, so in most cases, there is no necessity to show any clinical data. And so that's what we are seeing: clinical trials are rare in the field. But, of course, clinical trials should not hamper innovation. It's hard, because if you have a so-called adaptive AI tool that continuously learns, you could run a clinical trial, but how much of the clinical trial data will still help you in the long run, [given that] the tool is constantly changing and adapting? If we get fully adaptive systems, it's going to be even more difficult to make sure that they stay safe and effective. You will probably need to have an ongoing monitoring system in place to be able to tackle that issue.

Q: There are many people who are still wary of the idea of AI being used in medical practice. It feels new, and people can be nervous about things that they haven't experienced before. What would you say to people with those concerns?

A: In general, AI tools can have a lot of potential. One also needs to really see what type of AI tool it is. If a physician is using it in practice, I think it's really important to be very frank with the patient about it. What are the benefits? What are the risks? What issues may be unknown? That way the patient has the choice to decide whether they want it to be used in their care, at least for the transition phase we are in. Because at this point, the use of AI is not yet the standard of care. But, of course, the standard of care evolves.
During this transition phase, it's going to be essential that physicians communicate properly with patients.

Q: What do you think SSRN brings to the world of research and scholarship?

A: I think SSRN is great because, first of all, it's a known platform. It's free. Everyone can use it: it's open access, so that's great. Also, we can upload forthcoming paper drafts early on, so that the work can be spread across disciplines to other scholars before it's even been published. I think these are all great advantages for scholars in general and give people access to the work as soon as possible.

You can see more work by Sara Gerke on her SSRN Author page here.
The Emerging Field of Climate Finance with Peter Tufano

Recently, SSRN announced the new Climate Finance eJournal, sponsored by the MSCI Sustainability Institute. This area includes content on the application of financial economics to climate change mitigation, adaptation, and resiliency. Subscribe to the Climate Finance eJournal for free here. Harvard Professor Peter Tufano, one of the eJournal's editors, spoke with SSRN about climate finance as an emerging field and how his research fits into this growing body of work.

Peter Tufano is a Baker Foundation Professor at Harvard Business School (HBS) and Senior Advisor to the Harvard Salata Institute for Climate and Sustainability. From 2011 to 2021, he served as the Peter Moores Dean at Saïd Business School at the University of Oxford, where he championed a systems-change element to business education. From 1989 to 2011, he was a Professor at HBS, where he oversaw the school's tenure and promotion processes, campus planning, and university relations and was the founding co-chair of the Harvard i-lab. His current work focuses on climate finance, climate alliances, and the financial impact of climate on households. His longer body of research and course development also spans financial innovation, financial engineering, and household finance. He and his co-editor, Laura Starks, created the collaborative doctoral reading group, The Financial Economics of Climate and Sustainability.

Q: Climate finance is an emerging field that has been growing more prominent in recent years. For those unfamiliar with the subject, could you explain what climate finance is?

A: Climate finance is a subset of what you might call whole-system finance, which is directing large flows of funding to address some of the biggest problems that need to be solved in the world.
These problems are very large, very global, have very long time horizons, and sit on the boundary between private and public. Finally, they are consequential, and in the case of climate, existential.

One way to solve these problems is to simply say that's the province of government. But often we need the expertise and capital of the private sector. Redirecting not only huge flows of money but also transforming systems, like energy systems, will require the combination of public and private finances and a host of techniques, tools, and leadership across sectors.

The other thing that makes climate finance interesting is its time frame. Most of the time in finance, we don't evaluate projects that have multi-generational outcomes. With any traditional discount rate, something that takes place 60 or 70 years in the future would have a value today of about zero. The normal approaches to valuation pretend as if the long-horizon future doesn't matter. Clearly it does, so that requires us to think a little bit differently.

Succinctly, climate finance studies the tools and techniques that will direct large resources and risk-bearing to solve climate and planetary problems.

Q: What kind of research is done within this field? What are some of the main goals of research and study within climate finance?

A: The climate finance research community is still evolving, with different clusters of researchers pursuing different topics, depending on their prior work. Let's talk about the academic researchers first. If you approach this new field from a traditional academic finance perspective, then you'd likely try to figure out how climate will change the way that we think about asset prices, financial intermediaries, household finance, corporate finance, and public finance.

If you come from the policy end of finance, you might begin with bigger-picture questions. For example, suppose that we were able to make investments to keep us on a 1.5- or 1.8-degree C.
trajectory, demanding a meaningful percentage of global GDP. Where would that money come from? What might get crowded out? How would the math work?

So depending on where you come from, you will be drawn to different questions. Therefore, we expect a range of approaches within the new SSRN eJournal, as there are already in this emerging field.

Q: You serve on the advisory council for the MSCI Sustainability Institute and as a senior advisor to the Harvard Salata Institute for Climate and Sustainability. What insights do these roles give you into the future of where those research subtopics are headed?

A: Let me start with the work at Salata. Most universities rely on a set of "cylinders of excellence," as one of my colleagues called it, more commonly called silos. This is because in those distinct domain-expertise areas, scholars can know their material exceptionally well. Climate, sadly, doesn't respect any academic boundaries. The first insight from the work at Salata is that our disciplinary boundaries are going to have to be more permeable so that we can be aware of and fully consider the science of climate change, the economics of climate change, and the organizational reality of affecting climate change. Addressing climate change rigorously will require the best of all of our disciplines.

MSCI is a remarkable organization, and I can't do justice to all that it does, but surely, it is preeminent in collecting data that can be used to drive decisions. In the climate space, the data that we're going to need will be highly multi-dimensional. As an example, much of finance and financial analysis is not place-based. But with climate issues, place matters. We're going to have to think about the implications of physical locations for the risks to which we are exposed.
Just as MSCI has evolved over time to incorporate more and more decision-relevant data, work in climate will demand that we use a wider set of data to do cutting-edge and relevant research.

Q: In a paper you co-authored called "The Evolving Academic Field of Climate Finance," you say that the sheer scale of the greenhouse-gas-induced climate crisis will force us to rethink and refine our financial theories and practices. What would you say are some of the biggest challenges in terms of rethinking those theories and practices, especially considering that so much of this work is still evolving, uncharted territory?

A: Let me offer three ideas we need to rethink. First, in any MBA class we value everything on the basis of private benefits to investors. We don't even try to value the social benefits or harms of projects. The first thing we have to do is to broaden how we evaluate projects, firms, and initiatives to make more intelligent decisions. Second, again considering valuation, we use discount rates to bring monies back and forth in time. But these discount rates are inappropriate in considering very long-horizon outcomes. At a discount rate of 4%, the value of a dollar at the turn of the next century is $0.05. This implies that a human life then is worth one-twentieth of a life today, a very important ethical concept. Finally, while we have been indoctrinated to believe that markets solve all problems, the core principles of economics remind us that this will only be true if there are no externalities, which is clearly not the case when the private cost of emissions remains essentially zero.

I think students and academics have to be alerted to a broader set of questions. Where this starts, in my mind, is in education. We need to ultimately transform our educational systems and what we teach.
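The back-of-the-envelope discounting claim above is easy to check with the standard present-value formula. A minimal sketch in Python; the 4% rate is from the interview, while the 76-year horizon (roughly today to the turn of the next century) is an assumption used for illustration:

```python
def present_value(amount: float, rate: float, years: int) -> float:
    """Value today of a cash amount received `years` from now,
    discounted at a constant annual rate."""
    return amount / (1 + rate) ** years

# One dollar received around the year 2100, discounted at 4% per year,
# is worth roughly five cents today -- about one-twentieth of its face value.
print(round(present_value(1.0, 0.04, 76), 2))  # 0.05
```

The same function also shows why "60 or 70 years in the future" rounds to nearly zero under any traditional discount rate: at 7%, a dollar 70 years out is worth about a penny today.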
But the only way we're going to do that is to have professors who understand this space, which is why a number of us got together to found Financial Economics of Climate and Sustainability (FECS), a doctoral course that we offered across 130 schools last year and that this year is welcoming research staffs at government agencies. FECS trains the next generation of doctoral students and researchers, who can be the next generation of professors and policy makers, who can then go and intelligently think through these issues and not only produce great research, but [also] communicate it in a way that makes it meaningful.

Q: In the paper I mentioned earlier, you talk about the interdisciplinary nature of climate finance and how the vast impact of climate change really blurs the lines between areas of study that may have been distinctly separate before. How do different fields and perspectives help foster research that contributes to these big goals and big questions about sustainability and climate change?

A: This evolution will happen in stages. I think the first stage will be the acknowledgement of the importance of the climate topic within disciplines and locating climate issues within existing fields. Before we get to interdisciplinary or multidisciplinary research, let's first understand how it affects each of our disciplines. I think that if we start by staying in our lanes and understanding the implications of climate in, say, asset pricing or household finance, we will begin to be open to other disciplines.

To foster the kind of true multi-disciplinarity that addresses whole-system problems like climate will require confronting inherent tensions in academia. There are, in academia, various norms and practices, like how we evaluate candidates for tenure and which journals publish which papers. For mostly good reasons, both of these tend to use narrow definitions, largely to demonstrate the depth that we demand of excellent work.
As a result, tenure decisions and journals are to a large degree defined by our core disciplines, not by the problems we address. There are some problem-based journals, and [the Climate Finance eJournal] is an example.

A second, perhaps even more important consideration is: what's the channel to impact? How is it that this research is going to drive action? We need to think and act differently in order to have greater impact, which might involve expanding our definitions of excellent research, substantially improving research communications, or regularly having a new type of sabbatical where scholars can rotate into government and business to increase the impact and relevance of their work.

Q: You've spoken before about the fact that there are a lot of levers, a lot of different ways, to kickstart climate finance and progress. If these mechanisms for change exist, what's holding us back, as researchers, businesses, society, from acting on solutions? Where's the turning point to go from theoretical ideas to taking the kind of action that you're talking about?

A: I've used the term kickstart in a number of different contexts, but the physical image of a lever is helpful. Systems-change scholars often organize actions in terms of which have the most and the least leverage. What is the long-run impact if we can change specific outcomes, [such as] passing a law? What if we routinely measure impact? What if we encourage different levels of collaboration? And at the far end, with the most leverage, how can we change the way that people think about problems?

I think that there are promising examples where we are effecting systems change. We start with changing measurable things, and we've seen this in changes in disclosure policies. The huge pushback in the U.S. against climate disclosure almost surely reflects some groups' fears that this disclosure would show the harms that they are causing.
Blended finance and climate finance are about the merging of public and private funding: new forms of collaboration. We need to change and create new feedback loops. We are doing that by materializing demand through advanced market commitments, where buyers signal future demand by placing orders in advance. We are seeing change happening through tax policies, both carrots (like the U.S. IRA) and sticks (like the European Carbon Border Adjustment Mechanism). We are seeing change happening through collaboration, and in particular, alliances. We are seeing this change in the discussion moving from shareholder to stakeholder capitalism.

We are starting to see people move from thinking of the climate issue as a nice thing that tree huggers do to something that is going to affect all of us, and therefore we all have a responsibility to do something about it. If you look at the levers for systems change, which are practical, structural, and cultural, I can see examples of all of them where we are making progress. But not enough progress, and not fast enough, according to the most recent science.

Q: So there is a bright future ahead in all those areas?

A: I don't know if I'd say bright future. I, and others (this is not my original idea), think there's a major difference between optimism and hope. Optimism is a statistical belief that the future will be better, and hope is more of a belief that with certain actions, there's a chance that the future could be better. I don't know that I'm an optimistic person, but I am a hopeful person, and I think we have to be.

Q: Are there any research focuses specifically you think will be especially promising in the coming years? What kind of things should we keep an eye out for?

A: There's so much new, interesting work going on right now. I was just chairing a session at a big banking conference with new work on how banks are incorporating climate into their lending decisions.
My colleagues are doing more exciting work on how the insurance sector can play a bigger role in reducing emissions and in financing the transition. There's serious and difficult work to hold various groups accountable, by studying those who make promises and then don't follow up on them, or say one thing and then lobby to do other things. We have to call that out. I'm hard pressed to think about what there isn't to do.

As we're launching this SSRN eJournal, the initial base of papers that we're going to have will probably be around 1,000. In 10 years, I think that number could easily be 10 to 20 times that. Collectively, I hope that these papers will not only add to our understanding of how finance can change the world, but also help turn these ideas into action.

Q: What are some of your current research interests?

A: I am very interested in climate alliances. The dominant way of thinking in business is that competition is the natural order and societies will progress by firms competing with one another. Surely, that's true to some extent. But in the climate space, where there are huge externalities, this model breaks down. I think there's potentially an important role for collaboration: both collaboration between firms and collaboration between firms and governments. We need to understand how collaboration in the climate space can complement private competitive activity and government action. We need to study this both theoretically and empirically, and I am working actively on this question. I'm also doing some work on the boundary between household finance and climate, linking my old and new research agendas.
Finally, I'm very excited about new work by young scholars linking insurance and climate, and I hope to contribute to this very new field.

Q: Is there anything else you'd like to add about climate finance or your work?

A: When I returned to Harvard to teach after a decade of being a Dean at Oxford, I decided that I wanted to teach a doctoral course in climate finance, in part as a service to the school, but also as a way to get current on the latest literature. As a result of doing that, I reached out to people in the profession about what they were teaching in their doctoral classes. I rapidly learned that no major school had a doctoral course on climate finance.

So ten of us got together and said, "Why don't we collaboratively put together the syllabus? And why don't we teach it across our schools?" In 2025, we're going to run version 3.0 of Financial Economics of Climate and Sustainability. We will reach doctoral students and researchers at over 100 schools, and this year, we'll also be welcoming the research staffs at major financial regulators. We summarize the newest content in this space, and each local school customizes the course to fit its own circumstances. It's an example of how collaboration can be catalytic in the climate space, at least in our small way.

If you look at the names of the teaching group, they will be familiar, because they are the Advisory Board for this journal, along with my co-editor, Laura Starks. What's fascinating is that they all had hugely successful research careers before they pivoted to study climate. This is instructive because it shows that we can transform our research and teaching, starting one person at a time. Finally, we are all doing this as volunteers, for the benefit of a thousand future finance professors. But if we go beyond that, why not make all of this research available even more widely? When I joined the MSCI Advisory Board, I mentioned this idea to them.
I'd already edited two SSRN eJournals in the past, so it wasn't hard to link MSCI, SSRN, and this amazing group of scholars that I am privileged to work with to create this new Climate Finance eJournal.

To see more work by Peter Tufano, visit his SSRN Author page here.

Is there an eJournal you want to sponsor? Contact sales@ssrn.com for more information.