TIME100 Impact Dinner London: AI Leaders Discuss Responsibility, Regulation, and Text as a Relic of the Past
Jade Leung, Victor Riparbelli, and Abeba Birhane participate in a panel during the TIME100 Impact Dinner London on Oct. 16, 2024. TIME

By Tharin Pillay

Updated: October 16, 2024 10:56 PM EDT | Originally published: October 16, 2024 9:03 PM EDT

On Wednesday, luminaries in the field of AI gathered at Serpentine North, a former gunpowder store turned exhibition space, for the inaugural TIME100 Impact Dinner London. Following a similar event held in San Francisco last month, the dinner convened influential leaders, experts, and honorees of TIME's 2023 and 2024 lists of the 100 Most Influential People in AI, all of whom are playing a role in shaping the future of the technology.

After a discussion between TIME's CEO Jessica Sibley and executives from the event's sponsors (Rosanne Kincaid-Smith, group chief operating officer at Northern Data Group, and Jaap Zuiderveld, Nvidia's VP of Europe, the Middle East, and Africa), and after the main course had been served, attention turned to a panel discussion. The panel featured TIME100 AI honorees Jade Leung, CTO at the U.K. AI Safety Institute, an institution established last year to evaluate the capabilities of cutting-edge AI models; Victor Riparbelli, CEO and co-founder of the U.K.-based AI video communications company Synthesia; and Abeba Birhane, a cognitive scientist and adjunct assistant professor at the School of Computer Science and Statistics at Trinity College Dublin, whose research focuses on auditing AI models to uncover empirical harms. Moderated by TIME senior editor Ayesha Javed, the discussion focused on the current state of AI and its associated challenges, the question of who bears responsibility for AI's impacts, and the potential of AI-generated videos to transform how we communicate.

The panelists' views on the risks posed by AI reflected their respective focus areas. For Leung, whose work involves assessing whether cutting-edge AI models could be used to facilitate cyber, biological, or chemical attacks, and evaluating models for any other harmful capabilities more broadly, the focus was on the need to "get our heads around the empirical data that will tell us much more about what's coming down the pike and what kind of risks are associated with it."

Birhane, meanwhile, emphasized what she sees as the "massive hype" around AI's capabilities and potential to pose existential risk. "These models don't actually live up to their claims," she said. Birhane argued that AI is "not just computational calculations. It's the entire pipeline that makes it possible to build and to sustain systems," citing as examples the importance of paying attention to where data comes from, the environmental impacts of AI systems (particularly in relation to their energy and water use), and the underpaid labor of data labelers. "There has to be an incentive for both big companies and for startups to do thorough evaluations on, not just the models themselves, but the entire AI pipeline," she said.

Riparbelli suggested that both "fixing the problems already in society today" and thinking about "Terminator-style scenarios" are important, and worth paying attention to.

Panelists agreed on the vital importance of evaluations for AI systems, both to understand their capabilities and to discern their shortfalls on issues such as the perpetuation of prejudice.
"Because of the complexity of the technology and the speed at which the field is moving, best practices for how you deal with different safety challenges change very quickly," Leung said, pointing to a "big asymmetry" between what is known publicly to academics and to civil society, and what is known within these companies themselves.

The panelists further agreed that both companies and governments have a role to play in minimizing the risks posed by AI. "There's a huge onus on companies to continue to innovate on safety practices," said Leung. Riparbelli agreed, suggesting companies may have a "moral imperative" to ensure their systems are safe. "At the same time, governments have to play a role here. That's completely non-negotiable," said Leung.

Equally, Birhane was clear that effective regulation based on empirical evidence is necessary. "A lot of governments and policymakers see AI as an opportunity, a way to develop the economy for financial gain," she said, pointing to tensions between economic incentives and the interests of disadvantaged groups. Governments need to see evaluations and regulation "as a mechanism to create better AI systems, to benefit the general public and people at the bottom of society."

When it comes to global governance, Leung emphasized the need for clarity on what kinds of guardrails would be most desirable, from both a technical and policy perspective. "What are the best practices, standards, and protocols that we want to harmonize across jurisdictions?" she asked. "It's not a sufficiently resourced question." Still, Leung pointed to the fact that China was party to last year's AI Safety Summit hosted by the U.K. as cause for optimism. "It's very important to make sure that they're around the table," she said.

One concrete area where the advance of AI capabilities can be observed in real time is AI-generated video. In a synthetic video created by his company's technology, Riparbelli's AI double declared that "text as a technology" is "ultimately transitory" and will "become a relic of the past." Expanding on the thought, the real Riparbelli said: "We've always strived towards more intuitive, direct ways of communication. Text was the original way we could store and encode information and share time and space. Now we live in a world where, for most consumers at least, they prefer to watch and listen to their content."

He envisions a world where AI bridges the gap between text, which is quick to create, and video, which is more labor-intensive but also more engaging. AI will enable anyone to "create a Hollywood film from their bedroom without needing more than their imagination," he said. The technology poses obvious challenges in terms of its potential for abuse, for example in creating deepfakes or spreading misinformation, but Riparbelli emphasized that his company takes steps to prevent this, noting that "every video, before it gets generated, goes through a content moderation process where we make sure it fits within our content policies."

Riparbelli suggested that rather than a technology-centric approach to regulating AI, the focus should be on designing policies that reduce harmful outcomes. "Let's focus on the things we don't want to happen and regulate around those."
The TIME100 Impact Dinner London: Leaders Shaping the Future of AI was presented by Northern Data Group and Nvidia Europe.