Meet the Author: Sara Gerke
Sara Gerke is an Associate Professor of Law and Richard W. & Marie L. Corman Scholar at the University of Illinois College of Law. Her research focuses on the ethical and legal challenges of artificial intelligence and big data for health care and health law in the United States and Europe. She also researches the comparative law and ethics of other issues at the cutting edge of medical developments, biological products, reproductive medicine, and digital health more generally. Professor Gerke has over 60 publications in health law and bioethics, and her work has appeared in leading law, medical, scientific, and bioethics journals. She is leading several research projects, including CLASSICA (Validating AI in Classifying Cancer in Real-Time Surgery) and OperA (Optimizing Colorectal Cancer Prevention Through Personalized Treatment With Artificial Intelligence), both funded by the European Union. She spoke with SSRN about the legal and ethical challenges of integrating artificial intelligence into the medical field and how much farther we still have to go.

Q: You've led and been part of research projects on topics like artificial intelligence (AI) in healthcare and the legal and ethical implications of integrating new and cutting-edge technology into medical practice. What has driven your interest and motivation within this field of study?

A: I moved to the U.S. about six years ago. Previously, I was the General Manager of the Institute for German, European and International Medical Law, Public Health Law and Bioethics of the Universities of Heidelberg and Mannheim, and I was also doing a lot of grant writing. I already felt like we were getting to the point where we had to submit applications for technology, because that's the new world; that's where it's heading. It takes quite some time until you can actually execute a grant. I somehow got to know about this project [in the U.S.], and I was really excited about it. It would have taken probably another two years until I could carry out something similar in Germany. Long term, I had always thought about pursuing an international career so that I could write more articles in English and really spread my work around the world. I applied for that position, was very lucky to get it, and was responsible for the day-to-day work of the Project on Precision Medicine, Artificial Intelligence, and the Law (PMAIL) at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics at Harvard Law School. I feel these days everyone is very aware of AI, at the latest since ChatGPT came on the market. But six years ago, we were one of the very small groups who were really looking at AI and digital technology in the healthcare field from an ethical but also legal perspective.

Q: Your most downloaded paper on SSRN, "Ethical and Legal Challenges of Artificial Intelligence-Driven Healthcare," written in 2020, highlights several main concerns regarding AI in healthcare, both in the legal and ethical sense. In the four years since you wrote this, which legal or ethical problems do you think have made the most progress?

A: I don't think we made enough progress in those years since I wrote that paper. This paper has been cited so much because we were essentially the first who wrote a paper on this topic. At that point, it didn't have much interest yet. But what we are seeing comparatively (because I do a lot of comparative law between the EU and the U.S. on the regulatory front) is that a lot has happened since then.
The Food and Drug Administration (FDA) has thought a lot about how to regulate AI and machine learning (ML)-based medical devices. Also, we have seen in the EU just recently the AI Act, the world's first regulatory framework for AI.

In the last two years or so, people have become very aware of questions of liability and liability risk when introducing those tools into the healthcare system and into hospitals. The FDA has already authorized over 950 AI/ML-based medical devices. And frankly, not all of the AI tools are considered or classified as medical devices. We have a lot of in-house developments and other clinical decision tools that fall outside of FDA regulation. Those tools [have] already been implemented in healthcare. In recent years, these questions of "who's going to be held liable if something goes wrong?" are much more pressing.

Q: How did working on this research early on inform what you do now?

A: I feel very fortunate that I have been working in this field for such a long time now to build my expertise. I can now adapt to new challenges relatively easily because I have long experience of going deep into the legal and ethical issues of AI. I'm doing a lot of interdisciplinary work. I work with physicians, with engineers, and so I can always get new ideas from those collaborations. Every day you read something [new] about AI: it's hard to keep up at that pace. So, some groundwork of knowledge is really helpful to be able to catch up on all the new developments happening every single day.

Q: You've done a lot of work regarding medical AI applications used for self-diagnosing and the importance of labeling these apps properly. Talk a little bit about why it's important to educate people about the function and purpose of self-diagnosing AI apps like these.

A: I'm still doing a lot of work on the questions of labeling, because I think it's so important. Labeling is just one small piece of this entire puzzle. I am a believer in labeling AI, in particular in the healthcare field. Imagine two different types of tools. You can first imagine a tool which is being used by a physician in a hospital. The label is directed to the healthcare provider and [those] who need information about the tool to assess whether it makes sense to use that tool safely on their patients.

On the other hand, we have a lot of direct-to-consumer AI tools, including apps. Here, consumers also need to get some type of information. This information might potentially be a little different in detail. A consumer might care about other things like privacy, whether this is safe to use, or whether data is properly protected, while a physician might need more granular information about whether this dataset [has] been properly tested, with breakdowns of race, ethnicity, location, age, gender, etc. That information is really important for physicians to know, and unfortunately, we don't have proper standards in place at this point which require manufacturers to disclose that. The Office of the National Coordinator for Health IT (ONC) has recently put out a rule that requires some type of transparency and some kind of "nutrition label." [It's] really nice to see that regulators are now finally doing something in regard to labeling, but it's just a tiny number of tools that are covered by that rule. I'm still waiting and hoping for the FDA to change that. At some point, hopefully all AI tools will have some type of label, so that users are getting proper information.
Sometimes one gets the criticism, "well, no one reads labels," but it's better to have that information as an option to read, if you want to. I think there is a necessity to require that disclosure.

Q: You've pointed out in your work that while many of these apps might be labeled as "information only" tools rather than actual diagnostic tools, people sometimes misperceive them as making real diagnoses. Do you see this misperception as an example or indicator of some sort of larger issue with how medical AI tools are understood by consumers?

A: The issue that we see in some of these apps is that they (for example, the electrocardiogram app from Apple) are actually information tools and not [meant] to make a diagnosis. But if people are using those tools for [something] like skin cancer screenings, they usually tend to believe whatever the AI tool gives them. [They] might just skip a doctor visit because they are scanning with their app, and [think], "oh, it seems I don't have any cancer. Everything looks good, so I don't necessarily need to go to the doctor right now." We see a lot of empirical data on that, that consumers are perceiving those tools as diagnostic tools. But if one looks at the language, all of those manufacturers are clearly articulating that a direct-to-consumer tool, often available without any prescription, doesn't replace going to the doctor. I think we need to have measures [and] more education around it. The majority of health AI apps or general health apps are not being reviewed by the FDA: they fall outside of the Federal Food, Drug, and Cosmetic Act. So, it is hard for consumers to assess whether an app is reliable or not, because consumers usually just type into the App Store what they are looking [for], and whatever pops up first, they are likely going to download.

Q: AI has developed very quickly in the past few years, and it has raised problems that nobody really had to think about before. Because of this, there's a retroactive response to certain problems. How do you think researchers can begin to anticipate where the field is headed, to ensure that AI is used responsibly in healthcare?

A: I personally don't have a crystal ball, and I think that's the problem. Because honestly, if you had asked people maybe three [or] four years ago, I don't think they would have suggested that we'd come [this] far so quickly. We have these generative AI tools that have incredible capacities but also raise a lot of new issues which we have not anticipated. Once you get it out of the box, it's hard to get it back into the box.

That's a problem now, because retroactively making laws around this is really challenging, and we are not seeing right now in the U.S. that that's going to necessarily change. I think the approach in the U.S. is going to be more like a mosaic style. We have different regulators, and everyone is going to do some stuff in their wheelhouse. Hopefully, there will be enough collaboration and understanding that it's going to be a mosaic or puzzle to be completed, rather than overlapping and making it much more complicated for stakeholders to understand and oversee.

Q: Are there any papers, projects, or research you are working on right now that you're particularly excited about?

A: I always have a lot of projects going on, because AI keeps me busy. I have several research projects I'm involved in. I'm leading two projects in the ethics and legal field, which are called CLASSICA and OperA, and they are funded by the European Union.
I've also been one of the PIs (principal investigators) of a National Institute of Biomedical Imaging and Bioengineering (NIBIB) and National Institutes of Health Office of the Director (NIH OD) grant on technology. In particular, in the two projects funded by the European Union, clinical trials are being carried out. One is for colon cancer prevention. AI tools are being tested to see if they are going to be helpful in the long run. Similarly, [in] the CLASSICA project, that's a project on AI-assisted surgery. The surgeons are testing an AI tool that can predict whether tissue is benign or malignant in real time during surgery.

I'm not involved in the clinical trials, but for me, it's really exciting, because my team is looking at some of the legal and ethical issues, such as, do surgeons have any reluctance to implement such a tool and use it in the operating room? [and] are there any liability risks that they are worried about?

My work keeps me busy in the liability space, but also in regulation and the questions of, how should regulators in the U.S. and Europe regulate more complex tools like generative AI? That's going to be a real challenge.

There are many questions, and in the liability space in particular, the more sophisticated the AI tool becomes, the more interesting and unsettled the question is. How does tort law deal with an AI tool that practices medicine? An AI tool is not considered a legal person at this point. So at some point in the future, you might have an AI tool that is so sophisticated [that] it's the standard of care and is implemented in a hospital. It might be that you can't find any human fault in the physician using the AI tool: the physician needs to rely on the output of the AI tool because it's so complex. And then there's this question of, if harm occurs, who's going to be held liable if you can't find human fault in the physician? Because it was totally fine of the physician to rely on the AI tool in the first place. These are questions which we need to tackle in the future, once such AI is implemented in the healthcare field.

Q: How long do some of these clinical trials take?

A: The ones carried out in the projects I am involved in last for several years. These are long-term projects spanning four to five years, and I think they are [some] of the rare occasions where clinical trials are actually being carried out in the AI field. The majority of AI tools, in fact, especially in the U.S., have not undergone any clinical trial studies. How the Federal Food, Drug, and Cosmetic Act works is that there is a pathway called the 510(k) pathway. What you need to show as a manufacturer is that your device is substantially equivalent to another legally marketed device, which usually does not require any clinical evidence. We have seen that the majority of the AI-based medical devices authorized by the FDA went through the 510(k), so in most cases, there is no necessity to show any clinical data. And so that's what we are seeing: clinical trials are rare in the field. But, of course, clinical trials should not hamper innovation. It's hard, because if you have a so-called adaptive AI tool that continuously learns, you could have a clinical trial, but how much of the clinical trial data will still help you in the long run, [given that] the tool is constantly changing and adapting? If we are getting fully adaptive systems, it's going to be even more difficult to make sure that they stay safe and effective.
You will probably need to have an ongoing monitoring system in place to be able to tackle that issue.

Q: There are many people who are still wary of the idea of AI being used in medical practice. It feels new, and people can be nervous about things that they haven't experienced before. What would you say to people with those concerns?

A: In general, AI tools can have a lot of potential. I think one also needs to really see what type of AI tool it is. If a physician is using it in practice, I think it's really important to be very frank with the patient about it. What are the benefits? What are the risks? What may be issues which are unknown, so that the patient could have the choice to decide whether they want it to be used in their care, at least for the transition phase of where we are. Because at this point, the use of AI is not yet the standard of care. But, of course, the standard of care evolves. During this transition phase, it's going to be essential that physicians communicate properly with patients.

Q: What do you think SSRN brings to the world of research and scholarship?

A: I think SSRN is great because, first of all, it's a known platform. It's free. Everyone can use it: it's open access, so that's great. Also, we can upload forthcoming paper drafts early on, so that they can be spread across disciplines to other scholars before they have even been published. I think these are all great advantages for scholars in general and give people access to the work as soon as possible.

You can see more work by Sara Gerke on her SSRN Author page here.