This Week in AI: More capable AI is coming, but will its benefits be evenly distributed?
Hiya, folks, welcome to TechCrunch's regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

The AI news cycle didn't slow down much this holiday season. Between OpenAI's 12 days of "shipmas" and DeepSeek's major model release on Christmas Day, blink and you'd miss some new development.

And it's not slowing down now. On Sunday, OpenAI CEO Sam Altman said in a post on his personal blog that he thinks OpenAI knows how to build artificial general intelligence (AGI) and is beginning to turn its aim to superintelligence.

AGI is a nebulous term, but OpenAI has its own definition: "highly autonomous systems that outperform humans at most economically valuable work." As for superintelligence, which Altman understands to be a step beyond AGI, he said in the blog post that it could massively accelerate innovation well beyond what humans are capable of achieving on their own.

"[OpenAI continues] to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes," Altman wrote.

Altman, like OpenAI rival Anthropic's CEO Dario Amodei, is of the optimistic belief that AGI and superintelligence will lead to wealth and prosperity for all. But assuming AGI and superintelligence are even feasible without new technical breakthroughs, how can we be sure they'll benefit everyone?

A recent concerning data point is a study flagged by Wharton professor Ethan Mollick on X earlier this month. Researchers from the National University of Singapore, the University of Rochester, and Tsinghua University investigated the impact of OpenAI's AI-powered chatbot, ChatGPT, on freelancers across different labor markets.

The study identified an economic "AI inflection point" for different job types. Before the inflection point, AI boosted freelancer earnings; web developers, for example, saw a roughly 65% increase. But after the inflection point, AI began replacing freelancers, and translators saw an approximate 30% drop.

The study suggests that once AI starts replacing a job, it doesn't reverse course. And that should concern all of us if more capable AI is indeed on the horizon.

Altman wrote in his post that he's "pretty confident" that everyone will see the importance of maximizing broad benefit and empowerment in the age of AGI and superintelligence. But what if he's wrong? What if AGI and superintelligence arrive, and only corporations have something to show for it? The result won't be a better world, but more of the same inequality. And if that's AI's legacy, it'll be a deeply depressing one.

News

Silicon Valley stifles doom: Technologists have been ringing alarm bells for years about the potential for AI to cause catastrophic damage. But in 2024, those warning calls were drowned out.

OpenAI losing money: OpenAI CEO Sam Altman said that the company is currently losing money on its $200-per-month ChatGPT Pro plan because people are using it more than the company expected.

Record generative AI funding: Investments in generative AI, which encompasses a range of AI-powered apps, tools, and services to generate text, images, videos, speech, music, and more, reached new heights last year.

Microsoft ups data center spending: Microsoft has earmarked $80 billion in fiscal 2025 to build data centers designed to handle AI workloads.

Grok 3 MIA: xAI's next-gen AI model, Grok 3, didn't arrive on time, adding to a trend of flagship models that missed their promised launch windows.

Research paper of the week

AI might make a lot of mistakes. But it can also supercharge experts in their work.

At least, that's the finding of a team of researchers hailing from the University of Chicago and MIT. In a new study, they suggest that investors who use OpenAI's GPT-4o to summarize earnings calls realize higher returns than those who don't.

The researchers recruited investors and had GPT-4o give them AI summaries aligned with their investing expertise. Sophisticated investors got more technical AI-generated notes, while novices got simpler ones. The more experienced investors saw a 9.6% improvement in their one-year returns after using GPT-4o, while the less experienced investors saw a 1.7% boost. That's not too shabby for AI-human collaboration, I'd say.

Model of the week

METAGENE-1's performance on various benchmarks. Image Credits: Prime Intellect

Prime Intellect, a startup building infrastructure for decentralized AI system training, has released an AI model that it claims can help detect pathogens.

The model, called METAGENE-1, was trained on a dataset of over 1.5 trillion DNA and RNA base pairs sequenced from human wastewater samples. Created in partnership with the University of Southern California and SecureBio's Nucleic Acid Observatory, METAGENE-1 can be used for various metagenomic applications, Prime Intellect said, like studying organisms.

"METAGENE-1 achieves state-of-the-art performance across various genomic benchmarks and new evaluations focused on human-pathogen detection," Prime Intellect wrote in a series of posts on X. "After pretraining, this model is designed to aid in tasks in the areas of biosurveillance, pandemic monitoring, and pathogen detection."

Grab bag

In response to legal action from major music publishers, Anthropic has agreed to maintain guardrails preventing its AI-powered chatbot, Claude, from sharing copyrighted song lyrics.

Labels, including Universal Music Group, Concord Music Group, and ABKCO, sued Anthropic in 2023, accusing the startup of copyright infringement for training its AI systems on lyrics from at least 500 songs. The suit hasn't been resolved, but for the time being, Anthropic has agreed to stop Claude from providing lyrics to songs owned by the publishers and creating new song lyrics based on the copyrighted material.

"We continue to look forward to showing that, consistent with existing copyright law, using potentially copyrighted material in the training of generative AI models is a quintessential fair use," Anthropic said in a statement.