2024 Showed It Really Is Possible to Rein in AI
By Todd Feathers | Published December 25, 2024

Image: The AI executive order Joe Biden signed in 2023 paved the way for much of the federal government's work in 2024. (Bloomberg via Getty Images)

Nearly all the big AI news this year was about how fast the technology is progressing, the harms it's causing, and speculation about how soon it will grow past the point where humans can control it. But 2024 also saw governments make significant inroads into regulating algorithmic systems. Here is a breakdown of the most important AI legislation and regulatory efforts from the past year at the state, federal, and international levels.

State

U.S. state lawmakers took the lead on AI regulation in 2024, introducing hundreds of bills. Some had modest goals, like creating study committees, while others would have imposed serious civil liability on AI developers in the event their creations caused catastrophic harm to society. The vast majority of the bills failed to pass, but several states enacted meaningful legislation that could serve as models for other states or Congress (assuming Congress ever starts functioning again).

As AI slop flooded social media ahead of the election, politicians in both parties got behind anti-deepfake laws. More than 20 states now have prohibitions against deceptive AI-generated political advertisements in the weeks immediately before an election. Bills aimed at curbing AI-generated pornography, particularly images of minors, also received strong bipartisan support in states including Alabama, California, Indiana, North Carolina, and South Dakota.

Unsurprisingly, given that it's the backyard of the tech industry, some of the most ambitious AI proposals came out of California. One high-profile bill would have forced AI developers to take safety precautions and held companies liable for catastrophic damages caused by their systems. That bill passed both bodies of the legislature amid a fierce lobbying effort but was ultimately vetoed by Governor Gavin Newsom.

Newsom did, however, sign more than a dozen other bills aimed at less apocalyptic but more immediate AI harms. One new California law requires health insurers to ensure that the AI systems they use to make coverage determinations are fair and equitable. Another requires generative AI developers to create tools that label content as AI-generated. And a pair of bills prohibits the distribution of a dead person's AI-generated likeness without prior consent and mandates that agreements for living people's AI-generated likenesses must clearly specify how the content will be used.

Colorado passed a law, the first of its kind in the U.S., requiring companies that develop and use AI systems to take reasonable steps to ensure the tools aren't discriminatory. Consumer advocates called the legislation an important baseline. It's likely that similar bills will be hotly debated in other states in 2025.

And, in a middle finger to both our future robot overlords and the planet, Utah enacted a law that prohibits any governmental entity from granting legal personhood to artificial intelligence, inanimate objects, bodies of water, atmospheric gases, weather, plants, and other non-human things.

Federal

Congress talked a lot about AI in 2024, and the House ended the year by releasing a 273-page bipartisan report outlining guiding principles and recommendations for future regulation.
But when it came to actually passing legislation, federal lawmakers did very little.

Federal agencies, on the other hand, were busy all year trying to meet the goals set out in President Joe Biden's 2023 executive order on AI. And several regulators, particularly the Federal Trade Commission and Department of Justice, cracked down on misleading and harmful AI systems.

The work agencies did to comply with the AI executive order wasn't particularly sexy or headline grabbing, but it laid important foundations for the governance of public and private AI systems in the future. For example, federal agencies embarked on an AI-talent hiring spree and created standards for responsible model development and harm mitigation. And, in a big step toward increasing the public's understanding of how the government uses AI, the Office of Management and Budget wrangled (most of) its fellow agencies into disclosing critical information about the AI systems they use that may impact people's rights and safety.

On the enforcement side, the FTC's Operation AI Comply targeted companies using AI in deceptive ways, such as to write fake reviews or provide legal advice, and it sanctioned the AI gun-detection company Evolv for making misleading claims about what its product could do. The agency also settled an investigation with facial recognition company IntelliVision, which it accused of falsely saying its technology was free of racial and gender bias, and banned the pharmacy chain Rite Aid from using facial recognition for five years after an investigation determined the company was using the tools to discriminate against shoppers.

The DOJ, meanwhile, joined state attorneys general in a lawsuit accusing the real estate software company RealPage of a massive algorithmic price-fixing scheme that raised rents across the nation. It also won several antitrust lawsuits against Google, including one involving the company's monopoly over internet searches, which could significantly shift the balance of power in the burgeoning AI search industry.

Global

In August, the European Union's AI Act went into effect. The law, which is already serving as a model for other jurisdictions, requires AI systems that perform high-risk functions, such as assisting with hiring or medical decisions, to undergo risk mitigation and meet certain standards around training data quality and human oversight. It also bans the use of other AI systems, such as algorithms that could be used to assign a country's residents social scores that are then used to deny rights and privileges.

In September, China issued a major AI safety governance framework. Like similar frameworks published by the U.S. National Institute of Standards and Technology, it's non-binding, but it creates a common set of standards for AI developers to follow when identifying and mitigating risks in their systems.

One of the most interesting pieces of AI policy legislation comes from Brazil. In late 2024, the country's Senate passed a comprehensive AI safety bill. It faces a challenging road forward, but if passed, it would create an unprecedented set of protections for the kinds of copyrighted material commonly used to train generative AI systems. Developers would have to disclose which copyrighted material was included in their training data, and creators would have the power to prohibit the use of their work for training AI systems or to negotiate compensation agreements based, in part, on the size of the AI developer and how the material would be used.
Like the EU's AI Act, the proposed Brazilian law would also require high-risk AI systems to follow certain safety protocols.