From Training Language Models to Training DeepSeek-R1
Author(s): Akhil Theerthala
February 17, 2025 · Originally published on Towards AI

Reasoning Models #1: An Overview of Training
From RNNs to LLMs, a comprehensive overview of how training regimes changed.

You probably already understand the potential of reasoning models. Playing around with o1 or DeepSeek-R1 shows us these models' enormous promise, and as enthusiasts, we are all curious to build something like them.

We all start down this path, but the sheer scale of things quickly becomes overwhelming, and it is hard to know where to begin. Rightfully so: six or seven years ago, we only needed an input and an output to train a model, and as anyone who builds models knows, getting even those two things right is hard. Things are far more complex now; we need additional task-specific data for every task we take on.

As an enthusiast, I want to dig deeper into these reasoning models and learn what they are and how they work. As part of that process, I plan to share everything I've learned as a series of articles, so I get a chance to discuss these topics with like-minded folks. So, please keep commenting and sharing your thoughts as you read.

Without further delay, I'd like to dive into today's topic.