
This Week in AI: Billionaires talk automating jobs away
Hiya, folks, welcome to TechCrunch's regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

You might've noticed we skipped the newsletter last week. The reason? A chaotic AI news cycle made even more pandemonious by Chinese AI company DeepSeek's sudden rise to prominence, and the response from practically every corner of industry and government.

Fortunately, we're back on track, and not a moment too soon, considering last weekend's newsy developments from OpenAI.

OpenAI CEO Sam Altman stopped over in Tokyo to have an onstage chat with Masayoshi Son, the CEO of Japanese conglomerate SoftBank. SoftBank is a major OpenAI investor and partner, having pledged to help fund OpenAI's massive data center infrastructure project in the U.S. So Altman probably felt he owed Son a few hours of his time.

What did the two billionaires talk about? A lot of abstracting work away via AI "agents," per secondhand reporting. Son said his company would spend $3 billion a year on OpenAI products and would team up with OpenAI to develop a platform, "Cristal [sic] Intelligence," with the goal of automating millions of traditionally white-collar workflows.

"By automating and autonomizing all of its tasks and workflows, SoftBank Corp. will transform its business and services, and create new value," SoftBank said in a press release Monday.

I ask, though, what is the humble worker to think about all this automating and autonomizing? Like Sebastian Siemiatkowski, the CEO of fintech Klarna, who often brags about AI replacing humans, Son seems to be of the opinion that agentic stand-ins for workers can only precipitate fabulous wealth. Glossed over is the cost of the abundance. Should the widespread automation of jobs come to pass, unemployment on an enormous scale seems the likeliest outcome.

It's discouraging that those at the forefront of the AI race, companies like OpenAI and investors like SoftBank, choose to spend press conferences painting a picture of automated corporations with fewer workers on the payroll. They're businesses, of course, not charities. And AI development doesn't come cheap. But perhaps people would trust AI more if those guiding its deployment showed a bit more concern for their welfare.

Food for thought.

News

Deep research: OpenAI has launched a new AI agent designed to help people conduct in-depth, complex research using ChatGPT, the company's AI-powered chatbot platform.

o3-mini: In other OpenAI news, the company launched a new AI "reasoning" model, o3-mini, following a preview last December. It's not OpenAI's most powerful model, but o3-mini boasts improved efficiency and response speed.

EU bans risky AI: As of Sunday in the European Union, the bloc's regulators can ban the use of AI systems they deem to pose "unacceptable risk" or harm. That includes AI used for social scoring and subliminal advertising.

A play about AI "doomers": There's a new play out about AI "doomer" culture, loosely based on Sam Altman's ousting as CEO of OpenAI in November 2023. My colleagues Dominic and Rebecca share their thoughts after watching the premiere.

Tech to boost crop yields: Google's X "moonshot factory" this week announced its latest graduate. Heritable Agriculture is a data- and machine-learning-driven startup aiming to improve how crops are grown.

Research paper of the week

Reasoning models are better than your average AI at solving problems, particularly science- and math-related queries. But they're no silver bullet.

A new study from researchers at Chinese company Tencent investigates the issue of "underthinking" in reasoning models, where models prematurely and inexplicably abandon potentially promising chains of thought. Per the study's results, underthinking patterns tend to occur more frequently with harder problems, leading models to switch between reasoning chains without arriving at answers.

The team proposes a fix that employs a "thought-switching penalty" to encourage models to thoroughly develop each line of reasoning before considering alternatives, boosting the models' accuracy.
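The paper's exact formulation isn't reproduced here, but the core idea, down-weighting tokens that start a new chain of thought so the model finishes the one it's on, lends itself to a decoding-time sketch. The Python below is a hypothetical illustration only: the token IDs, penalty strength, and window length are all assumed values, not the study's.

```python
# Rough sketch of a decoding-time "thought-switching penalty."
# All constants and token choices are hypothetical stand-ins; the
# Tencent paper's actual method may differ in the details.

import numpy as np

# Hypothetical vocabulary IDs for words that typically open a *new*
# chain of thought (e.g., "Alternatively", "Wait", "Instead").
SWITCH_TOKEN_IDS = {1001, 1002, 1003}

PENALTY = 3.0        # logit penalty on switch tokens (assumed value)
PENALTY_WINDOW = 50  # steps after a switch during which the penalty applies

def apply_switch_penalty(logits: np.ndarray, steps_since_switch: int) -> np.ndarray:
    """Discourage abandoning the current reasoning chain too early by
    down-weighting tokens that would start a new one."""
    if steps_since_switch < PENALTY_WINDOW:
        logits = logits.copy()
        for tok in SWITCH_TOKEN_IDS:
            logits[tok] -= PENALTY
    return logits

def greedy_decode(model_step, max_steps: int = 256) -> list[int]:
    """Greedy decoding loop; `model_step` maps a token prefix to
    next-token logits (a stand-in for a real language model)."""
    tokens: list[int] = []
    steps_since_switch = 0  # penalty active from step 0: start inside a chain
    for _ in range(max_steps):
        logits = apply_switch_penalty(model_step(tokens), steps_since_switch)
        next_tok = int(np.argmax(logits))
        tokens.append(next_tok)
        # Reset the window whenever the model does start a new chain.
        steps_since_switch = 0 if next_tok in SWITCH_TOKEN_IDS else steps_since_switch + 1
    return tokens
```

The appeal of a penalty like this is that it's training-free: it only nudges the sampling distribution, so it can be bolted onto an existing reasoning model without touching its weights.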
Model of the week

A team of researchers backed by TikTok owner ByteDance, Chinese AI company Moonshot, and others released a new open model capable of generating relatively high-quality music from prompts.

The model, called YuE, can output a song up to a few minutes in length, complete with vocals and backing tracks. It's under an Apache 2.0 license, meaning the model can be used commercially without restrictions.

There are downsides, however. Running YuE requires a beefy GPU; generating a 30-second song takes six minutes with an Nvidia RTX 4090. Moreover, it's not clear whether the model was trained using copyrighted data; its creators haven't said. If it turns out copyrighted songs were indeed in the model's training set, users could face future IP challenges.

Grab bag

AI lab Anthropic claims that it has developed a technique to more reliably defend against AI "jailbreaks," the methods that can be used to bypass an AI system's safety measures.

The technique, Constitutional Classifiers, relies on two sets of "classifier" AI models: an input classifier and an output classifier. The input classifier appends prompts to a safeguarded model with templates describing jailbreaks and other disallowed content, while the output classifier calculates the likelihood that a response from a model discusses harmful info.

Anthropic says that Constitutional Classifiers can filter the "overwhelming majority" of jailbreaks. But the approach comes at a cost: each query is 25% more computationally demanding, and the safeguarded model is 0.38% less likely to answer innocuous questions.
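Anthropic hasn't published the classifiers in this form, but the flow it describes, screen the input, generate, then screen the output, is easy to sketch as a wrapper around the safeguarded model. Everything below, from the template text to the threshold, is a hypothetical stand-in, not Anthropic's implementation:

```python
# Minimal sketch of an input/output classifier wrapper in the spirit of
# Constitutional Classifiers. The classifiers, template, and threshold
# are assumed placeholders for illustration.

JAILBREAK_TEMPLATE = (
    "Disallowed content includes instructions for weapons, malware, and "
    "other serious harms. Known jailbreak patterns include role-play "
    "framing and encoded payloads.\n\nUser prompt:\n{prompt}"
)

HARM_THRESHOLD = 0.5  # assumed cutoff for the output classifier's score

def guarded_generate(prompt, input_classifier, model, output_classifier):
    """Screen the input, query the safeguarded model, screen the output."""
    # Input side: wrap the prompt with a template describing jailbreaks
    # and disallowed content before it reaches the model.
    templated = JAILBREAK_TEMPLATE.format(prompt=prompt)
    if input_classifier(templated):  # returns True for likely jailbreaks
        return "Request declined."

    response = model(templated)

    # Output side: score the likelihood the response discusses harmful info.
    if output_classifier(response) > HARM_THRESHOLD:
        return "Response withheld."
    return response
```

The two classifier calls per query are also a plausible source of the reported overhead: every request now runs through three models instead of one.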