
How US AI policy might change under Trump
www.technologyreview.com
This story is from The Algorithm, our weekly newsletter on AI. To get it in your inbox first, sign up here.

President Biden first witnessed the capabilities of ChatGPT in 2022 during a demo from Arati Prabhakar, the director of the White House Office of Science and Technology Policy, in the Oval Office. That demo set a slew of events into motion and encouraged President Biden to support the US's AI sector while managing the safety risks that would come with it.

Prabhakar was a key player in passing the president's executive order on AI in 2023, which sets rules for tech companies to make AI safer and more transparent (though it relies on voluntary participation). Before serving in President Biden's cabinet, she held a number of government roles, from rallying for domestic production of semiconductors to heading up DARPA, the Pentagon's famed research department.

I had a chance to sit down with Prabhakar earlier this month. We discussed AI risks, immigration policies, the CHIPS Act, the public's faith in science, and how it all may change under Trump.

The change of administrations comes at a chaotic time for AI. Trump's team has not presented a clear thesis on how it will handle artificial intelligence, but plenty of people in it want to see that executive order dismantled. Trump said as much in July, endorsing the Republican platform, which says the executive order hinders AI innovation and imposes "Radical Leftwing ideas" on the development of this technology. Powerful industry players, like venture capitalist Marc Andreessen, have said they support that move.

However, complicating that narrative will be Elon Musk, who for years has expressed fears about doomsday AI scenarios and has been supportive of some regulations aiming to promote AI safety.
No one really knows exactly what's coming next, but Prabhakar has plenty of thoughts about what's happened so far. For her insights on the most important AI developments of the last administration, and what might happen in the next one, read my conversation with Arati Prabhakar.

Now read the rest of The Algorithm

Deeper Learning

These AI Minecraft characters did weirdly human stuff all on their own
The video game Minecraft is increasingly popular as a testing ground for AI models and agents. That's a trend startup Altera recently embraced. It unleashed up to 1,000 software agents at a time, powered by large language models (LLMs), to interact with one another. Given just a nudge through text prompting, they developed a remarkable range of personality traits, preferences, and specialist roles, with no further inputs from their human creators. Remarkably, they spontaneously made friends, invented jobs, and even spread religion.

Why this matters: AI agents can execute tasks and exhibit autonomy, taking initiative in digital environments. This is another example of how the behaviors of such agents, with minimal prompting from humans, can be both impressive and downright bizarre. The people working to bring agents into the world have bold ambitions for them. Altera's founder, Robert Yang, sees the Minecraft experiments as an early step toward large-scale "AI civilizations" with agents that can coexist and work alongside us in digital spaces. "The true power of AI will be unlocked when we have truly autonomous agents that can collaborate at scale," says Yang. Read more from Niall Firth.

Bits and Bytes

OpenAI is exploring advertising
Building and maintaining some of the world's leading AI models doesn't come cheap. The Financial Times has reported that OpenAI is hiring advertising talent from big tech rivals in a push to increase revenues.
(Financial Times)

Landlords are using AI to raise rents, and cities are starting to push back
RealPage is a tech company that collects proprietary lease information on how much renters are paying and then uses an AI model to suggest to landlords how much to charge for apartments. Eight states and many municipalities have joined antitrust suits against the company, saying it constitutes an unlawful information-sharing scheme and inflates rental prices. (The Markup)

The way we measure progress in AI is terrible
Whenever new models come out, the companies that make them advertise how they perform in benchmark tests against other models. There are even leaderboards that rank them. But new research suggests these measurement methods aren't helpful. (MIT Technology Review)

Nvidia has released a model that can create sounds and music
AI tools to make music and audio have received less attention than their counterparts that create images and video, except when the companies that make them get sued. Now, chip maker Nvidia has entered the space with a tool that creates impressive sound effects and music. (Ars Technica)

Artists say they leaked OpenAI's Sora video model in protest
Many artists are outraged at the tech company for training its models on their work without compensating them. Now, a group of artists who were beta testers for OpenAI's Sora model say they leaked it out of protest. (The Verge)