s1: A Simple Yet Powerful Test-Time Scaling Approach for LLMs
Language models (LMs) have progressed significantly by scaling compute at training time, primarily through large-scale self-supervised pretraining. While this approach has yielded powerful models, a new paradigm called test-time scaling has emerged, which improves performance by increasing computation at inference time. OpenAI's o1 model has validated this approach, showing enhanced reasoning capabilities through test-time compute scaling. Replicating these results has proven challenging, however, despite attempts using techniques like Monte Carlo Tree Search (MCTS), multi-agent approaches, and reinforcement learning. Even models like DeepSeek R1, trained on millions of samples through complex multi-stage pipelines, have not reproduced the test-time scaling behavior seen in o1.

Various methods have been developed to tackle the test-time scaling challenge. Sequential scaling approaches let a model generate successive solution attempts, each iteration building on previous outcomes. Tree-based search methods combine sequential and parallel scaling, implementing techniques like MCTS and guided beam search. REBASE has emerged as a notable approach: it uses a process reward model to optimize tree search through balanced exploitation and pruning, and it outperforms sampling-based methods and MCTS. These approaches rely heavily on reward models, which come in two forms: outcome reward models, which score complete solutions for Best-of-N selection, and process reward models, which assess individual reasoning steps in tree-based search.

Researchers from Stanford University, the University of Washington, the Allen Institute for AI, and Contextual AI have proposed a streamlined approach to achieving test-time scaling and enhanced reasoning. Their method centers on two key innovations: s1K, a carefully curated dataset of 1,000 questions with reasoning traces, selected for difficulty, diversity, and quality; and a technique called budget forcing. Budget forcing controls test-time computation by either cutting short or extending the model's thinking process, using strategic "Wait" insertions that prompt the model to review and correct its reasoning. The approach was implemented by fine-tuning the Qwen2.5-32B-Instruct language model on the s1K dataset.

The data selection process follows a three-stage filtering approach based on quality, difficulty, and diversity. The quality stage removes samples with API errors and formatting issues, reducing the initial dataset to 51,581 examples, from which 384 high-quality samples are selected up front. The difficulty assessment uses two metrics: model performance, evaluated with Qwen2.5-7B-Instruct and Qwen2.5-32B-Instruct and verified for correctness by Claude 3.5 Sonnet, and reasoning-trace length, measured with the Qwen2.5 tokenizer. For diversity, Claude 3.5 Sonnet classifies questions into domains using the Mathematics Subject Classification system. This filtering yields a final dataset of 1,000 samples spanning 50 domains; toy sketches of the selection pipeline and of budget forcing follow below.
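Here is a minimal Python sketch of that three-stage filter. The field names ('api_error', 'qwen7b_correct', 'trace_tokens', and so on) and the within-domain sampling rule are illustrative assumptions, not the paper's exact implementation:

```python
import random
from collections import defaultdict

def select_s1k(samples, target=1000, seed=0):
    """Three-stage filter: quality -> difficulty -> diversity.

    Each sample is a dict with hypothetical keys: 'api_error',
    'well_formatted', 'qwen7b_correct', 'qwen32b_correct',
    'domain', and 'trace_tokens'.
    """
    rng = random.Random(seed)

    # Stage 1: quality. Drop samples with API errors or formatting issues.
    pool = [s for s in samples if not s["api_error"] and s["well_formatted"]]

    # Stage 2: difficulty. Keep only questions that both grader models got
    # wrong (correctness is judged by a separate verifier in the paper).
    pool = [s for s in pool if not (s["qwen7b_correct"] or s["qwen32b_correct"])]

    # Stage 3: diversity. Bucket by domain (e.g., an MSC class), then sample
    # across domains, favoring longer reasoning traces within each bucket.
    buckets = defaultdict(list)
    for s in pool:
        buckets[s["domain"]].append(s)
    for bucket in buckets.values():
        bucket.sort(key=lambda s: s["trace_tokens"])  # longest trace last

    selected = []
    while len(selected) < target and buckets:
        domain = rng.choice(list(buckets))
        selected.append(buckets[domain].pop())  # take the longest remaining trace
        if not buckets[domain]:
            del buckets[domain]
    return selected
```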
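Budget forcing itself is a decode-time intervention. The sketch below, assuming a Hugging Face causal LM and a hypothetical "Final Answer:" end-of-thinking marker (the paper's actual delimiters and budgets differ), shows both controls: appending "Wait" to extend a trace that ends too early, and appending the marker to force an answer once the budget is spent.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen2.5-32B-Instruct"  # the base model fine-tuned in the paper
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
)

DELIM = "Final Answer:"  # hypothetical end-of-thinking marker

def budget_forced_generate(prompt, think_budget=2048, num_extensions=2):
    text = prompt
    for step in range(num_extensions + 1):
        ids = tok(text, return_tensors="pt").to(model.device)
        out = model.generate(**ids, max_new_tokens=think_budget, do_sample=False)
        text = tok.decode(out[0], skip_special_tokens=True)
        if DELIM in text and step < num_extensions:
            # Extend: strip the premature transition to the answer and append
            # "Wait", nudging the model to re-examine its reasoning so far.
            text = text.split(DELIM)[0].rstrip() + "\nWait"
        else:
            break
    if DELIM not in text:
        # Truncate: the thinking budget is spent, so force the answer now.
        ids = tok(text + "\n" + DELIM, return_tensors="pt").to(model.device)
        out = model.generate(**ids, max_new_tokens=256, do_sample=False)
        text = tok.decode(out[0], skip_special_tokens=True)
    return text
```

Note that this toy loop re-submits the full text each round, so every extension grants a fresh token budget; a production implementation would track the cumulative thinking length instead.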
The s1-32B model demonstrates significant performance improvements through test-time compute scaling with budget forcing. It operates in a superior scaling paradigm compared to the base Qwen2.5-32B-Instruct model with majority voting, validating the effectiveness of sequential scaling over parallel approaches. Moreover, s1-32B is the most sample-efficient open-data reasoning model, showing marked improvement over the base model with just 1,000 additional training samples; r1-32B achieves better performance but requires 800 times more training data. Notably, s1-32B approaches Gemini 2.0 Thinking's performance on AIME24, suggesting successful knowledge distillation.

This paper shows that supervised fine-tuning (SFT) with just 1,000 carefully selected examples can produce a competitive reasoning model that matches o1-preview's performance with high sample efficiency. The budget forcing technique, combined with the fine-tuned model, successfully reproduces OpenAI's test-time scaling behavior. That such minimal training data suffices suggests the model's reasoning capabilities are largely acquired during pretraining on trillions of tokens, with fine-tuning merely activating these latent abilities. This aligns with the Superficial Alignment Hypothesis from the LIMA work: a relatively small number of examples can effectively align a model's behavior with desired outcomes.
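As a rough picture of how small this training step is, here is a sketch of supervised fine-tuning on a 1K-example dataset with TRL's SFTTrainer. The file name, column format, and hyperparameters are assumptions for illustration, and a 32B model realistically needs multi-GPU infrastructure rather than this single-process setup.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical file: one JSON line per example, with a "text" column holding
# the question, reasoning trace, and answer already rendered as chat text.
dataset = load_dataset("json", data_files="s1k_formatted.jsonl", split="train")

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-32B-Instruct",  # base model fine-tuned in the paper
    train_dataset=dataset,
    args=SFTConfig(
        output_dir="s1-style-sft",
        num_train_epochs=5,              # assumed: tiny datasets tolerate more epochs
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=1e-5,
        bf16=True,
    ),
)
trainer.train()
```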
Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.