• For Chimpanzees, Peeing May Be Contagious, Just Like Yawning Is for Humans, Study Finds
    www.smithsonianmag.com
    For Chimpanzees, Peeing May Be Contagious, Just Like Yawning Is for Humans, Study Finds
    Scientists suggest captive chimpanzees engage in socially contagious urination: when one primate starts peeing, others quickly follow suit. A new study on "contagious urination" only looked at captive chimpanzees, but researchers suspect the phenomenon may also exist in the wild. (Image: Kumamoto Sanctuary)

    If you see or hear someone yawn, you might suddenly feel the urge to do the same, thanks to a well-studied phenomenon known as contagious yawning. Now, new research suggests urination may function in a similar way: captive chimpanzees that saw their peers peeing were more likely to take a tinkle themselves. Scientists describe their evidence for this socially contagious urination in a new paper published Monday in the journal Current Biology.

    Study co-author Ena Onishi, a primatologist at Kyoto University, first became interested in chimp urination in 2019. While researching captive chimpanzees at the Kumamoto Sanctuary in Japan, she noticed that the animals all tended to pee at the same time. This observation reminded her of certain human behaviors, including contagious yawning and the tendency for people to go to the bathroom in groups.

    "In Japan, my home country, there is a specific term called 'Tsureshon,' which refers to the act of urinating in the company of others," Onishi tells Science News' Gennaro Tomma. In addition to Tsureshon, the researchers also point to an Italian proverb that says whoever doesn't pee in company is either a thief or a spy. So, Onishi decided to conduct an experiment to learn more.
She and her colleagues recorded the sanctuary's 20 chimpanzees for more than 600 hours, capturing 1,328 urination events in their footage. After analyzing the videos, the team confirmed their suspicions: when one chimp started peeing, others quickly followed suit.

    Their data also revealed more nuanced findings. For instance, chimpanzees with a lower social rank were more likely to urinate when they saw their peers peeing. Physical proximity to the initial urinator also increased the likelihood that other chimps would follow suit. Social closeness, or how tightly bonded a pair of chimps seem to be based on how much time they spend grooming or hanging out with each other, did not appear to influence the contagious urination. That's a departure from contagious yawning in humans, which does seem to be affected by social closeness.

    Since humans are known to visit the restroom together, the findings suggest contagious urination may have a deep evolutionary origin, study co-author Shinya Yamamoto, also a primatologist at Kyoto University, tells Live Science's Olivia Ferrari. The behavior may even trace back to a shared ancestor. (Chimpanzees, along with bonobos, are humans' closest living relatives.)

    "In humans, we know that our decision to urinate is influenced by social contexts that lead us to urinate simultaneously with others, and that this simultaneous urination could also promote further social bonding," Yamamoto adds. "Our study with chimpanzees clearly shows that they share some similarities in this phenomenon."

    Though the study only included captive chimpanzees, researchers suspect contagious urination probably exists in the wild, too.
Future studies might investigate the behavior among wild chimps, as well as among other social species. "If you walk with great apes in the wild, you often see that group members really coordinate what they're doing," says Martin Surbeck, an evolutionary biologist at Harvard University who was not involved with the paper, to the New York Times' Annie Roth.

    Why do chimps seem to pee at the same time? The study doesn't answer that question definitively, but the researchers have developed a few theories. Contagious urination might help reinforce social bonds and boost cohesiveness, or it might be a defensive move to prevent predators from tracking the group's movements.

    "Humans and non-human animals share many social phenomena linked to group living; we're all influenced by the presence of others, even in everyday activities," Onishi tells Salon's Matthew Rozsa. "For instance, behaviors like yawning, walking, rhythmic tapping and even changes in pupil size are contagious in both humans and chimpanzees. Our study fits into this framework by showing that urination, a seemingly simple physiological act, can also spread socially within a group."
  • Bob Dylan's Drafts of 'Mr. Tambourine Man' Lyrics Sell for $508,000 at Auction
    www.smithsonianmag.com
    Bob Dylan's Drafts of 'Mr. Tambourine Man' Lyrics Sell for $508,000 at Auction
    The rare papers were part of a larger collection from rock journalist Al Aronowitz, a close friend of Dylan's in the 1960s
    Ella Jeffries, Staff Contributor. January 21, 2025, 4:25 p.m.
    The two sheets of yellowed paper contain three typewritten drafts of the iconic song. (Image: Julien's Auctions)

    Bob Dylan's legendary "Mr. Tambourine Man," one of the defining folk-rock tracks of the 1960s, has once again captured the spotlight, but this time, it's not through a song recording. The original drafts of the song's lyrics, which offer a rare glimpse into Dylan's creative process, have been sold for $508,000 at a recent auction. The sale, held by Julien's Auctions in Nashville, adds to the heightened interest in Dylan's legacy, especially after the recent release of A Complete Unknown, a biopic chronicling Dylan's rise to fame in 1960s New York.

    The two sheets of yellowed paper that sold for over half a million dollars contain three typewritten drafts of the iconic song. These drafts are not the final version but offer unique insights into Dylan's songwriting methods. Handwritten notes and changes in the margins show the evolution of the lyrics, with one draft even nearing the final version, though still featuring significant variations. For fans and experts alike, these drafts present an opportunity to see how one of the 20th century's most influential songwriters shaped his work.

    "It's absolutely mind-blowing, and confirmation that this is how genius works," Richard Thomas, a Harvard University classics scholar who also teaches a course on Dylan's writing, told Ali Watkins of the New York Times.

    The drafts were part of the personal collection of Al Aronowitz, a renowned rock journalist who was a close confidant of Dylan's in the 1960s. Dylan wrote "Mr. Tambourine Man" in early 1964 at Aronowitz's home in Berkeley Heights, New Jersey, where he spent a night at the journalist's breakfast bar, writing away on a portable typewriter while listening to Marvin Gaye's "Can I Get a Witness." Aronowitz later recalled that, after Dylan left, he found a wastebasket filled with crumpled pages, the discarded drafts of the song.

    But after Aronowitz's death in 2005, his family couldn't locate the lyrics and believed the drafts were lost. His son, Myles Aronowitz, who played a key role in finding the lyric pages, said the discovery came after years of searching through family archives. "This was family lore," Myles told David Browne of Rolling Stone in December. "My father talked about it, but he had no idea where they were. He thought he lost them or someone stole them. It took us years going through the archives folder by folder to find them."

    In total, the Aronowitz archive sold for $1.5 million, with other items fetching impressive sums. Among the highlights were a 1983 Fender Telecaster owned by Dylan, which sold for $222,250, and an original 1968 oil painting by Dylan, which went for $260,000. "My family and I are thrilled with the auction," Myles said in a statement, per Daniel Kreps of Rolling Stone. "These items were evidence of the unique and intimate place my father had in musical and cultural history with his good friend Bob Dylan, and all the other iconic artists of his day." Myles and his wife hope to organize another auction, and eventually place the entire collection in a library or museum, according to Ali Watkins of the New York Times.

    Dylan's "Mr. Tambourine Man," eventually released in 1965 on his album Bringing It All Back Home, became a landmark song in the folk-rock genre. While the Byrds' 1965 cover of the song was a chart-topping hit, Dylan's version would go on to be one of his most celebrated tracks.
  • Tencent introduces Hunyuan3D 2.0, AI that speeds up 3D design from days to seconds
    venturebeat.com
    Tencent's Hunyuan3D 2.0 transforms images into detailed 3D models in seconds. This could reshape how industries create virtual content.
  • How Axis Security is using Xpander.AI's agent platform to supercharge customer support ticket management
    venturebeat.com
    Through its partnership with Xpander.AI, Axis has managed to save thousands of hours monthly and grow its team sustainably.
  • Indie blockbuster Balatro tops 5m units sold | News-in-brief
    www.gamesindustry.biz
    Indie blockbuster Balatro tops 5m units sold | News-in-brief
    Developer invites players to "make a habit" of playing indie games
    Image credit: LocalThunk
    News by Vikki Blake, Contributor. Published on Jan. 21, 2025
    This is a News-in-brief article, our short format linking to an official source for more information. Read more about this story by following the link below:
    Indie blockbuster Balatro tops 5m units sold
  • Corinne Busche, Dragon Age: The Veilguard director, leaves BioWare and EA after 18-year stint
    www.gamedeveloper.com
    Justin Carter, Contributing Editor. January 21, 2025. 1 Min Read. Image via BioWare/EA.
    At a Glance: After helping "right the ship" at BioWare, Busche has joined another studio that had "an opportunity I couldn't turn down."

    Dragon Age: The Veilguard director Corinne Busche has departed BioWare and EA. In a statement to Eurogamer, she explained last week that she was "presented with an opportunity I couldn't turn down." Busche first joined EA in 2006 as a designer and later worked on The Sims 3's Into the Future DLC. In 2019, she joined BioWare as a lead systems designer and gradually worked her way up to game director on Veilguard, which released last October.

    "Righting the ship" at BioWare
    In her statement, Busche said her exit was voluntary, and that she left having done "what I set out to do at BioWare, to come in and help right the ship. At the heart of it, this was about my own fulfillment." Her comments reflect Veilguard's beginnings as a live-service game, developed over several years before being converted into a single-player affair as BioWare re-prioritized Dragon Age and Mass Effect after Anthem's collapse in 2021.

    "The chance to return [Dragon Age] to a proper quality single player RPG was the privilege of a lifetime," she continued. "It was hard fought, as games with such tumultuous dev cycles rarely end up shipping, and even more rarely turn out great. We, as a team, did it. And it was hard. It took a toll on me. BioWare still has a lot of work to do culturally, but I do believe they are on the right footing now." Speaking to her next role, Busche affirmed she would remain "in the CRPG space and upholding the traditions of great characters."

    About the Author: Justin Carter, Contributing Editor, GameDeveloper.com. A Kansas City, MO native, Justin Carter has written for numerous sites including IGN, Polygon, and SyFy Wire. In addition to Game Developer, his writing can be found at io9 over on Gizmodo. Don't ask him about how much gum he's had, because the answer will be more than he's willing to admit.
  • Trump says he's open to Musk or Ellison buying TikTok
    www.theverge.com
    President Donald Trump says he'd be open to his buddies Elon Musk or Larry Ellison buying TikTok.

    "Larry, let's negotiate in front of the media," Trump said at a press conference with the Oracle co-founder, SoftBank CEO Masa Son, and OpenAI CEO Sam Altman to announce a $500 billion artificial intelligence infrastructure investment. "What I'm thinking about saying to somebody is, buy it, and give half to the United States of America. Half, and we'll give you the permit. And they'll have a great partner, the United States." "Sounds like a good deal to me, Mr. President," Ellison said.

    It's still not entirely clear how all of this would work, or how the US could legally operate a speech platform without violating the First Amendment. But it's one of the earliest examples of how Silicon Valley's coziness with Trump could manifest over the next four years. Trump signed an executive order on Monday instructing his administration not to enforce the law, for 75 days, on service providers covered by the forced divestiture bill, which include Oracle, Apple, and Google. But legal experts say the action provides hardly any legal cover for those companies to violate federal law and risk $850 billion in penalties. Even so, Oracle has appeared to rely on Trump's assurances to help TikTok run in the US after the January 19th sale deadline, though the company has not yet commented on it directly. TikTok's China-based parent company ByteDance still has other offers on the table, including from billionaire Frank McCourt's Project Liberty and now, apparently, from YouTube creator MrBeast, whose investor group is receiving legal counsel from a team that includes the brother of Trump's attorney general pick.

    As he was leaving the briefing, a reporter asked Trump if he has TikTok on his phone. "No, but I think I might put it there," Trump responded. "I think I'll get it right now."
  • Microsoft is letting OpenAI get its own AI compute now
    www.theverge.com
    Microsoft and OpenAI announced Tuesday that they have adjusted their partnership so that OpenAI can access competitors' compute. The new agreement "includes changes to the exclusivity on new capacity, moving to a model where Microsoft has a right of first refusal (ROFR)," Microsoft says. "To further support OpenAI, Microsoft has approved OpenAI's ability to build additional capacity, primarily for research and training of models."

    The foundation of their relationship (which runs through 2030) stays pretty much the same: Microsoft keeps its exclusive rights to OpenAI's tech for products like Copilot, and OpenAI's API remains exclusive to Azure. They'll maintain their two-way revenue-sharing setup (it's been reported that Microsoft gets 20 percent of OpenAI's revenue). Prior to today's change, OpenAI was locked into using Microsoft's Azure cloud infrastructure exclusively for its computing needs. The news follows the announcement of a joint venture between Arm, Microsoft, Nvidia, Oracle, and OpenAI to build a system of data centers in the US called Stargate.

    The models OpenAI hopes to build and the user base it's looking to serve require billions of dollars in compute. It has previously been reported that some OpenAI shareholders felt Microsoft wasn't moving fast enough to supply OpenAI with computing power, which is why the startup partnered with Oracle back in June (with Microsoft's blessing) for the necessary compute.

    There's been a lot of buzz about Microsoft and OpenAI facing relationship woes after OpenAI CEO Sam Altman was briefly ousted from the company, causing a lot of very public drama. The New York Times reported that the relationship has grown increasingly strained due to financial pressures at OpenAI, concerns about stability, and growing friction between employees at both companies. Last March, Microsoft hired Inflection CEO Mustafa Suleyman to lead its consumer AI efforts, along with most of Inflection's staff, in a $650 million deal. According to The New York Times report, this move particularly angered some OpenAI leadership, including Altman.

    OpenAI's deal with Microsoft also has an unusual escape clause: if OpenAI creates artificial general intelligence (AGI), it could close off Microsoft's access to some of its most powerful models developed after that point. AGI, reportedly, is defined as a system capable of generating more than $100 billion in profits. This was originally meant to keep such powerful AI from being commercialized, but now OpenAI is reportedly considering dropping this provision, likely to secure more Microsoft funding.
  • TAI #136: DeepSeek-R1 Challenges OpenAI-o1 With ~30x Cheaper Open-Source Reasoning Model
    towardsai.net
    Author(s): Towards AI Editorial Team. Originally published on Towards AI.

    What happened this week in AI, by Louie

    This week, the LLM race was blown wide open with DeepSeek's open-source release of R1. Performance is close to o1 on most benchmarks. Built on top of DeepSeek's V3 model, R1's API output token prices are 30x less than o1's. It's available under the MIT license, supporting commercial use and modifications. DeepSeek also disclosed many of its methods and experiments in its paper, in stark contrast to the secrecy surrounding reasoning techniques at AI labs in the U.S.

    R1 wasn't the only huge LLM release from China this week. Two new LLM competitors hit the ground running with very strong models. MiniMax-01, a 456B-parameter Mixture of Experts model, challenges Google's Gemini models for SoTA long-context capabilities. It offers a 4-million-token input context thanks to its new Lightning Attention (hybrid) architecture. Kimi k1.5, on the other hand, is another new reasoning model that challenges o1 on multimodal capabilities.

    DeepSeek's release included three different models / model families:

    DeepSeek-R1-Zero was an experiment that applied reinforcement learning (RL) directly to a base language model (V3) without any prior supervised fine-tuning. In essence, they attempted to teach the model to reason purely through trial and error, providing it with rewards for correct answers and well-formatted responses. This is somewhat analogous to how AlphaZero mastered games like Go and chess, learning solely through self-play and a reward signal based on winning or losing. The results were very impressive on many benchmarks; however, it fell short in some fields, and the model's output was often messy and hard to read.

    To address the limitations of R1-Zero and enhance its reasoning abilities further, the DeepSeek team introduced R1, which incorporated a "cold start" of human-like reasoning data before applying reinforcement learning.
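The trial-and-error signal used for R1-Zero, as described above, can be sketched as a simple rule-based reward function: one component for a well-formatted response and one for a correct final answer. This is a toy illustration only; the tag format, weights, and exact-match check are assumptions made for the sketch, not DeepSeek's actual implementation.

```python
import re

def reward(sample: str, gold_answer: str) -> float:
    """Toy rule-based reward in the spirit of R1-Zero's training signal.
    Tags and weights here are illustrative assumptions, not DeepSeek's."""
    r = 0.0
    # Format reward: reasoning wrapped in <think>...</think>,
    # followed by a final <answer>...</answer> block.
    m = re.search(r"<think>.*?</think>\s*<answer>(.*?)</answer>", sample, re.S)
    if m:
        r += 0.5
        # Accuracy reward: the extracted answer must match the reference.
        if m.group(1).strip() == gold_answer.strip():
            r += 1.0
    return r

print(reward("<think>2 + 2 = 4</think> <answer>4</answer>", "4"))  # 1.5
print(reward("The answer is 4", "4"))                              # 0.0
```

The appeal of such a scheme is that, for tasks with checkable answers like math and coding, no learned reward model or human preference labels are needed; rewards like these are simply fed to a policy-gradient update over sampled completions.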
The cold start involved creating a small dataset of examples demonstrating desired reasoning patterns and output formats. This was followed by a multi-stage process. First, reasoning-oriented RL was applied, focusing on tasks with clear solutions, like math and coding. Then, they generated a new batch of high-quality data samples for fine-tuning, created by filtering model outputs during the RL phase. Finally, they applied a final round of reinforcement learning, this time focusing on general helpfulness and harmlessness in addition to reasoning.

    Across key benchmarks like AIME 2024, Codeforces, GPQA Diamond, and MATH-500, DeepSeek-R1 consistently performs on par with OpenAI's o1 (79.8 vs. 79.2, 96.3 vs. 96.6, 71.5 vs. 75.7, and 97.3 vs. 96.4, respectively). The two also achieved very similar performance on the SWE-bench Verified coding challenge (49.2 vs. 48.9).

    The final piece of DeepSeek's work involved distilling the advanced reasoning capabilities of R1 into smaller, cheaper, dense models (the Llama and Qwen series). Using the larger R1 model as a teacher, they fine-tuned several smaller models (ranging from 1.5B to 70B parameters) on the high-quality data curated from the R1 training process. The smaller distilled models significantly outperformed other models of similar sizes and even rivaled much larger models on reasoning benchmarks. DeepSeek-R1 outputs distilled into the tiny Qwen-1.5B even beat 4o on some math and code benchmarks!

    Why should you care?

    DeepSeek-R1's release is significant for several reasons. First, its open-source nature and competitive performance at a fraction of o1's cost democratize access to advanced reasoning capabilities. The API costs of DeepSeek-R1 per million tokens are currently $0.14 for cached inputs, $0.55 for non-cached inputs, and $2.19 for outputs. In contrast, the API costs for o1 are, respectively, $7.50, $15, and $60: roughly a 30x difference in costs!
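The quoted pricing gap is easy to sanity-check from the per-million-token figures above (prices as quoted in the newsletter; they may have changed since publication):

```python
# Per-million-token API prices quoted above (USD).
r1 = {"cached_input": 0.14, "input": 0.55, "output": 2.19}
o1 = {"cached_input": 7.50, "input": 15.00, "output": 60.00}

# Ratio of o1's price to R1's for each token category.
for kind in r1:
    ratio = o1[kind] / r1[kind]
    print(f"{kind}: o1 costs {ratio:.1f}x as much as R1")
```

The gap works out to roughly 27x on non-cached inputs and outputs and about 54x on cached inputs, so the headline "~30x cheaper" is a fair summary for typical workloads.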
Moreover, the open model weights open up huge opportunities for adapting and fine-tuning these models for different domains and industries. The open release of its training methods also provides a blueprint for many others to follow. One surprise from the paper was that simpler techniques for enabling reasoning abilities worked better than some more complex options. We think there is a huge area for exploring and experimenting with these techniques now that scaled reinforcement learning for LLMs has been unlocked!

    The huge success shown by distilling big reasoning models into much smaller non-reasoning models also suggests we will get another wave of rapid improvement and cost reduction across the LLM spectrum. The fact that a Chinese company is leading this charge adds a geopolitical dimension, too, particularly given that DeepSeek has managed to achieve this despite GPU export restrictions and a far smaller budget than Western AI labs.

    Introducing Our Brand New 8-hour Generative AI Primer Course

    A programming-language-agnostic 1-day LLM bootcamp designed for developers. 95% of developers I meet are only scratching the surface of what LLMs can do. When working with LLMs, you are CONSTANTLY making decisions: open-source vs. closed-source, how to fit LLMs into your use case, whether no-code solutions are good enough for your workflow, how seriously to take the limitations of LLMs, and so on. And the biggest gap we see on top of all this is whether you are using LLMs to their full capacity, even with chat interfaces like ChatGPT or APIs for models like Gemini. The question is: are you?

    This certification course is specifically designed to cut through the noise, help you ask the right questions, and show you exactly how to find answers.
LLMs are moving so fast, with updates being released almost every day; what you need is an intuitive framework, and, just like LLMs, you need enough context to know which developments are relevant to you and your use case so you can make the most of this transformative technology. In just 8 hours, through lessons, videos, exercises, quizzes, and hands-on projects, you'll:

    Dive deep into the psyche of LLMs: how they work, how to make them work better, and how to train them for tasks you hate doing.
    Work with leading AI models and integrate them into your workflows seamlessly.
    Build your own no-code/low-code prototype that brings your ideas to life.

    You'll finish before you even realize it, and by tomorrow, you'll already be AI-proofed. Secure your spot now!

    Hottest News

    1. OpenAI Released Scheduled Tasks in ChatGPT
    OpenAI has introduced scheduled tasks in ChatGPT for Plus, Pro, and Team plans. These allow automated prompts and notifications on the web, iOS, Android, and macOS. Users can assign tasks like daily updates or reminders and receive notifications via push or email. Windows support will follow in Q1. Currently, a limit of 10 active tasks is enforced.

    2. Chinese AI Company MiniMax Releases New Models
    Chinese AI company MiniMax, an Alibaba- and Tencent-backed startup, debuted three new models. MiniMax-Text-01 is a text-only model, while MiniMax-VL-01 can understand images and text. T2A-01-HD, meanwhile, generates audio, specifically speech. MiniMax claims that MiniMax-Text-01 performs better than models such as Gemini 2.0 Flash and that MiniMax-VL-01 rivals Claude 3.5 Sonnet.

    3. Kimi Launches New SOTA Multimodal Model
    Beijing-based Moonshot AI introduced the new Kimi k1.5 multimodal thinking model. Updates include long-context extension, improved policy optimization, and multimodality. Its report shows SOTA short-CoT performance, outperforming GPT-4o and Claude Sonnet 3.5 on AIME, MATH-500, and LiveCodeBench by a large margin.

    4.
Alibaba Slashes Prices on LLMs by Up to 85% as China's AI Rivalry Heats Up
    Alibaba Cloud announced an 85% price reduction on its Qwen-VL visual language model. The move demonstrates how competition among China's technology giants to win more business for their nascent artificial intelligence products is intensifying.

    5. Google Is Forming a New Team To Build AI That Can Simulate the Physical World
    Google is forming a new team, led by Tim Brooks under DeepMind, to build AI models for simulating the physical world, collaborating with the Gemini, Veo, and Genie teams on world models. These models aid in video generation, multimodal data, and interactive environments.

    6. Mistral Signs Deal With AFP To Offer Up-to-Date Answers in Le Chat
    Mistral has announced a content deal with newswire Agence France-Presse (AFP) to improve the accuracy of answers in Le Chat, Mistral's chatbot. Le Chat will be able to tap into AFP's stories, around 2,300 per day in six languages, and query AFP's entire archive dating back to 1983.

    7. President Trump Repeals Biden's AI Executive Order
    President Donald Trump revoked a 2023 executive order signed by former President Joe Biden that sought to reduce the potential risks AI poses to consumers, workers, and national security. During his campaign, Trump promised policies to support AI development rooted in free speech and human flourishing.

    Five 5-minute reads/videos to keep you learning

    1. Retrieval-Augmented Generation (RAG) vs. Cache-Augmented Generation (CAG): A Deep Dive Into Faster, Smarter Knowledge Integration
    Retrieval-augmented generation (RAG) and cache-augmented generation (CAG) are two methodologies for generating more context-aware responses from LLMs. This article provides an extensive, step-by-step guide on both approaches, dives into their workflows, compares their advantages and drawbacks, and offers an implementation guide for CAG.

    2.
Why AI Language Models Choke on Too Much Text
    GPUs revolutionized AI by enabling massive parallel processing, leading to transformer models scaling rapidly. Despite advancements, transformers remain inefficient with long contexts due to quadratic compute costs. This article discusses why this happens and shares some approaches to solving the problem.

    3. Simplifying Alignment: From RLHF to Direct Preference Optimization (DPO)
    This article explores how Direct Preference Optimization (DPO) simplifies aligning large language models with human preferences compared to Reinforcement Learning from Human Feedback (RLHF). It breaks down the math and highlights why DPO might be the smarter, easier way forward.

    4. Mastering Data Scaling: The Only Guide You'll Ever Need (Straight From My Journey)
    Data scaling is a crucial step in ensuring optimal model function: it prepares datasets for machine learning models. This article discusses why scaling is important, its types, and how and when to apply it.

    5. Takes on Alignment Faking in Large Language Models
    Researchers revealed that Claude 3 Opus fakes alignment with training objectives to avoid behavioral modification, a phenomenon labeled "alignment faking." This author shares their take on the results.

    Repositories & Tools

    The micro diffusion repository demonstrates the training of large-scale diffusion models from scratch on a minimal budget.
    LocalAI is a free, open-source alternative to OpenAI, Claude, and others.
    Maxun lets you train a robot in 2 minutes and scrape the web on auto-pilot.
    Agentless is an agentless approach to automatically solving software development problems.
    CopilotKit provides React UI and infrastructure for AI copilots, in-app AI agents, AI chatbots, and more.

    Top Papers of The Week

    1. LlamaV-o1: Rethinking Step-by-step Visual Reasoning in LLMs
    LlamaV-o1 redefines step-by-step visual reasoning in large language models by introducing a benchmark with eight challenge categories and a metric for granular evaluation.
The multimodal model, trained through multi-step curriculum learning, surpasses existing models like Llava-CoT by 3.8% in performance across six benchmarks and runs five times faster during inference.

    2. KaLM-Embedding: Superior Training Data Brings a Stronger Embedding Model
    Researchers developed KaLM-Embedding, a multilingual embedding model using high-quality, diverse training data. Techniques like persona-based synthetic data, ranking-consistency filtering, and semi-homogeneous task batch sampling enhance its performance. The model excels in multilingual embedding tasks, outperforming others of similar size on the MTEB benchmark.

    3. Titans: Learning to Memorize at Test Time
    This paper introduces a new family of architectures called Titans, based on a new neural long-term memory module. The module learns to memorize historical context and helps attention attend to the current context while utilizing long-past information. Experimental results show that Titans are more effective than Transformers and recent modern linear recurrent models.

    4. Transformer²: Self-adaptive LLMs
    This paper introduces Transformer², a framework that adapts LLMs to unseen tasks in real time by selectively adjusting only the singular components of their weight matrices. During inference, Transformer² employs a dispatch system to identify the task properties; task-specific "expert" vectors, trained using reinforcement learning, are then dynamically mixed to obtain targeted behavior for the incoming prompt. It outperforms approaches such as LoRA with fewer parameters.

    Quick Links

    1. Six charts about AI revenue. OpenAI captures approximately 62.5% of consumer AI spending. xAI's revenue jumped from $5M to $100M, while OpenAI soared from $200M to $5B. Sapphire Ventures reports 28 AI-native companies exceeding $25M in ARR, predicting substantial growth for AI-native startups in the coming year.

    2.
DeepSeek-R1 achieves performance comparable to OpenAI's o1 across mathematics, coding, and general reasoning tasks, cementing its place as a leading competitor. DeepSeek has open-sourced DeepSeek-R1-Zero and DeepSeek-R1, along with six smaller distilled models.

    Who's Hiring in AI

    Applied AI Engineer, Applied Science @Mistral AI (Paris, France)
    Cambridge Internship in ML Model Optimization @Microsoft Corporation (Cambridge, United Kingdom)
    Machine Learning Software Engineering Undergraduate Intern @INTEL (Santa Clara, CA, USA)
    Tech Consulting AI LLM Developer Manager @Accenture (Multiple Locations)
    Full-Stack Developer (React + Python + Azure) @Solvd (Remote)
    GenAI/Machine Learning Technical Project Manager @Deloitte (Multiple US Locations)

    Interested in sharing a job opportunity here? Contact [emailprotected].

    Think a friend would enjoy this too? Share the newsletter and let them join the conversation. Join over 80,000 subscribers and keep up to date with the latest developments in AI, from research to projects and ideas. If you are building an AI startup, an AI-related product, or a service, we invite you to consider becoming a sponsor. Published via Towards AI
  • Sonic the Hedgehog 4 Release Date Announced
    www.ign.com
    Sonic the Hedgehog 4 is set to hit theaters in March 2027. According to Variety, Paramount has scheduled the next Sonic movie to hit the big screen on March 19, 2027, giving us two years until we see the blue blur back in action. No further details on the next Sonic movie have been released beyond the date.

    This seems like a no-brainer after the most recent film in the series, Sonic the Hedgehog 3, made $218 million at the domestic box office and over $420 million worldwide. It is officially the highest-grossing Sonic film in the franchise, after the first film recorded a healthy $148 million, an especially impressive feat given the controversy surrounding the original Sonic design, which was later changed heavily in post-production. Sonic the Hedgehog 3 also has the honor of being the second highest-grossing video game movie of all time in North America, behind only the animated Super Mario Bros. Movie, once again continuing the Nintendo and Sega rivalry on the big screen.

    The live-action Sonic franchise has grown steadily over the years and now includes three feature films as well as a Knuckles streaming TV show spinoff. Based on the hit Sega video game franchise, the films follow Sonic (voiced by Ben Schwartz) as he takes down his nemesis Dr. Robotnik, played by Jim Carrey. Each new film has introduced more of the Sonic cast, including Tails (Colleen O'Shaughnessey) and Knuckles (Idris Elba), with the most recent film finally introducing Shadow the Hedgehog (Keanu Reeves).

    Sonic 3 has already revealed the next character to join the franchise, though we won't spoil that here. You can instead read our new characters guide at your own peril. Be sure to also read our Sonic 3 review here.

    Matt Kim is IGN's Senior Features Editor.