RAGEN: AI framework tackles LLM agent instability
Researchers have introduced RAGEN, an AI framework designed to counter LLM agent instability when handling complex situations.

Training these AI agents presents significant hurdles, particularly when decisions span multiple steps and involve unpredictable feedback from the environment. While reinforcement learning (RL) has shown promise in static tasks like solving maths problems or generating code, its application to dynamic, multi-turn agent training has been less explored.

Addressing this gap, a collaborative team from institutions including Northwestern University, Stanford University, Microsoft, and New York University has proposed StarPO (State-Thinking-Actions-Reward Policy Optimisation). StarPO offers a generalised approach for training agents at the trajectory level (i.e., it optimises the entire sequence of interactions, not just individual actions).

Accompanying this is RAGEN, a modular system built to implement StarPO. It enables the training and evaluation of LLM agents, with a particular focus on their reasoning capabilities under RL. RAGEN provides the necessary infrastructure for rollouts, reward assignment, and optimisation within multi-turn, stochastic (randomly determined) environments.

Minimalist environments, maximum insight

To isolate the core learning challenges from confounding factors like extensive pre-existing knowledge or task-specific engineering, the researchers tested LLMs using RAGEN in three deliberately minimalistic, controllable symbolic gaming environments:

- Bandit: A single-turn, stochastic task testing risk-sensitive symbolic reasoning. The agent chooses between options (like 'Phoenix' or 'Dragon' arms) with different, initially unknown, reward profiles.
- Sokoban: A multi-turn, deterministic puzzle requiring foresight and planning, as actions (pushing boxes) are irreversible.
- Frozen Lake: A multi-turn, stochastic grid-navigation task where movement attempts can randomly fail, demanding planning under uncertainty.

These environments allow for clear analysis of how agents learn decision-making policies purely through interaction.

Key findings: Stability, rollouts, and reasoning

The study yielded three significant findings concerning the training of self-evolving LLM agents.

The 'Echo Trap' and the need for stability

A recurring problem observed during multi-turn RL training was dubbed the "Echo Trap": agents would initially improve but then suffer performance collapse, overfitting to locally rewarded reasoning patterns. The collapse was marked by collapsing reward variance, falling entropy (a measure of randomness/exploration), and sudden spikes in gradients (indicating training instability); early warning signs included drops in reward standard deviation and output entropy.

To combat this, the team developed StarPO-S, a stabilised version of the framework. StarPO-S incorporates:

- Variance-based trajectory filtering: Focusing training on task instances where the agent's behaviour shows higher uncertainty (higher reward variance) and discarding low-variance, less informative rollouts. This improved stability and efficiency (see the sketch at the end of this article).
- Critic incorporation: Using methods like PPO (Proximal Policy Optimisation), which employ a 'critic' to estimate value, generally showed better stability than critic-free methods like GRPO (Group Relative Policy Optimisation) in most tests.
- Decoupled clipping and KL removal: Techniques adapted from other research (DAPO), involving asymmetric clipping (allowing more aggressive learning from positive rewards) and removing KL-divergence penalties (encouraging exploration), further boosted stability and performance (a sketch of asymmetric clipping also appears below).

StarPO-S consistently delayed collapse and improved final task performance compared to vanilla StarPO.

Rollout quality is crucial

The characteristics of the 'rollouts' (simulated interaction trajectories used for training) significantly impact learning. Key factors identified include:

- Task diversity: Training with a diverse set of initial states (prompts), with multiple responses generated per prompt, aids generalisation. The sweet spot seemed to be moderate diversity, enabling contrast between different outcomes in similar scenarios.
- Interaction granularity: Allowing multiple actions per turn (around 5-6 proved optimal) enables better planning within a fixed turn limit, without introducing the noise associated with excessively long action sequences.
- Rollout frequency: Using fresh, up-to-date rollouts that reflect the agent's current policy is vital. More frequent sampling (approaching an 'online' setting) leads to faster convergence and better generalisation by reducing policy-data mismatch.

Maintaining freshness, alongside appropriate action budgets and task diversity, is key for stable training.

Reasoning requires careful reward design

Simply prompting models to 'think' doesn't guarantee that meaningful reasoning emerges, especially in multi-turn tasks. The study found that reasoning traces helped generalisation in the simpler, single-turn Bandit task, even when symbolic cues conflicted with rewards. In multi-turn tasks like Sokoban, however, the benefits of reasoning were limited, and the length of 'thinking' segments consistently declined during training. Agents often regressed to direct action selection or produced "hallucinated reasoning" when rewards only tracked task success, revealing a "mismatch between thoughts and environment states."

This suggests that standard trajectory-level rewards (often sparse and outcome-based) are insufficient: "Without fine-grained, reasoning-aware reward signals, agent reasoning hardly emerge[s] through multi-turn RL." The researchers propose that future work should explore rewards that explicitly evaluate the quality of intermediate reasoning steps, perhaps using format-based penalties or rewarding explanation quality, rather than just final outcomes.

RAGEN and StarPO: A step towards self-evolving AI

The RAGEN system and StarPO framework represent a step towards training LLM agents that can reason and adapt through interaction in complex, unpredictable environments. The research highlights the unique stability challenges posed by multi-turn RL and offers concrete strategies, such as StarPO-S's filtering and stabilisation techniques, to mitigate them. It also underscores the critical role of rollout-generation strategies and the need for more sophisticated reward mechanisms to cultivate genuine reasoning, rather than superficial strategies or hallucinations.

While acknowledging limitations, including the need to test on larger models and to optimise for domains without easily verifiable rewards, the work opens "a scalable and principled path for building AI systems" in areas demanding complex interaction and verifiable outcomes, such as theorem proving, software engineering, and scientific discovery.
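To make the variance-based trajectory filtering idea concrete, here is a minimal, illustrative sketch rather than the authors' implementation. It assumes each rollout is a dictionary recording which task instance it was sampled from and its total trajectory reward (the field names and the keep fraction are assumptions for illustration); only the task instances whose sampled rewards spread the most are kept for training.

```python
import numpy as np

def filter_rollouts_by_variance(rollouts, keep_fraction=0.25):
    """Keep rollouts from the highest-variance task instances.

    `rollouts` is assumed (for illustration) to be a list of dicts with a
    'task_id' (the initial state / prompt the trajectory was sampled from)
    and a scalar 'reward' for the whole trajectory. Groups where every
    sampled trajectory earns roughly the same reward carry little learning
    signal and are dropped.
    """
    # Group trajectory rewards by the task instance they came from.
    groups = {}
    for r in rollouts:
        groups.setdefault(r["task_id"], []).append(r["reward"])

    # Score each task instance by the spread of its rewards.
    spread_by_task = {t: float(np.std(rs)) for t, rs in groups.items()}

    # Keep only the most "uncertain" (highest-variance) task instances.
    n_keep = max(1, int(len(spread_by_task) * keep_fraction))
    kept = set(sorted(spread_by_task, key=spread_by_task.get, reverse=True)[:n_keep])

    return [r for r in rollouts if r["task_id"] in kept]
```

The intuition is that updates are concentrated where outcomes still differ between sampled trajectories, which is where the gradient signal is most informative.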
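The decoupled (asymmetric) clipping described for StarPO-S can likewise be sketched as a PPO-style policy loss whose upper clip bound is looser than its lower bound, with no KL penalty term added. The PyTorch snippet below is an illustrative approximation under those assumptions; the epsilon values and tensor shapes are placeholders, not the paper's exact settings.

```python
import torch

def decoupled_clip_policy_loss(logprobs, old_logprobs, advantages,
                               eps_low=0.2, eps_high=0.28):
    """PPO-style clipped policy loss with an asymmetric (decoupled) clip range.

    The upper bound (1 + eps_high) is looser than the lower bound
    (1 - eps_low), so updates that raise the probability of
    positively-rewarded actions are clipped less aggressively.
    No KL-divergence penalty is added, matching the "KL removal"
    described above. Epsilon values here are illustrative.
    """
    ratio = torch.exp(logprobs - old_logprobs)  # importance ratio vs. old policy
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps_low, 1.0 + eps_high) * advantages
    # Take the pessimistic (minimum) objective per token, then average.
    return -torch.min(unclipped, clipped).mean()
```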