Towards High Data Efficiency in Reinforcement Learning with Verifiable Reward
Xinyu Tang · Zhenduo Zhang · Yurou Liu · Xin Zhao · Zujie Wen · Zhiqiang Zhang · Jun Zhou
Abstract
Recent advances in large language models (LLMs) have leveraged reinforcement learning with verifiable rewards (RLVR) to improve reasoning capabilities. However, scaling these methods typically requires massive amounts of data and extensive rollout computation, leading to high training costs and low data efficiency. To mitigate this issue, we propose DEPO, a Data-Efficient Policy Optimization approach that combines optimized strategies for both offline and online data selection. In the offline phase, we curate a high-quality subset of training data based on multiple objectives, including diversity, influence, and difficulty. During online RLVR training, we propose a sample-level explorability metric that dynamically filters out samples with low exploration potential, substantially reducing rollout computation costs. In addition, we employ a replay mechanism for under-explored samples to ensure they receive sufficient training, which improves final convergence performance. Experiments on five reasoning benchmarks show that DEPO consistently outperforms existing methods in both offline and online data-selection settings. Notably, using only 20% of the training data, our approach achieves a 1.85$\times$ speed-up on AIME24 and a 1.66$\times$ speed-up on AIME25 compared to GRPO trained on the full dataset.
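The abstract does not specify how the explorability metric or the replay mechanism are defined, so the following is only a minimal Python sketch of the online filtering-plus-replay idea it describes. It assumes (i) per-prompt success rates from earlier rollouts are tracked in a `history` map, (ii) an illustrative explorability proxy `acc * (1 - acc)`, and (iii) arbitrary values for the threshold `tau` and `replay_ratio`; none of these specifics come from the paper.

```python
# Hedged sketch: sample-level filtering with a replay buffer for the online
# phase. The metric, threshold, and replay policy below are assumptions made
# for illustration, not the paper's actual definitions.

from collections import deque
from typing import Dict, List


def explorability(success_rate: float) -> float:
    """Proxy score: prompts that are always or never solved yield little
    group-relative advantage signal, so their score is near zero; prompts
    with intermediate success rates score highest."""
    return success_rate * (1.0 - success_rate)


def select_online_batch(
    prompts: List[str],
    history: Dict[str, float],       # prompt -> success rate from past rollouts
    replay_buffer: deque,            # prompts deferred in earlier steps
    tau: float = 0.05,               # assumed explorability threshold
    replay_ratio: float = 0.25,      # assumed share of the batch drawn from replay
) -> List[str]:
    """Keep prompts with exploration potential; defer the rest and replay a few."""
    kept, deferred = [], []
    for p in prompts:
        # Unseen prompts default to 0.5 so they are always worth exploring.
        if explorability(history.get(p, 0.5)) >= tau:
            kept.append(p)           # spend rollouts on this prompt now
        else:
            deferred.append(p)       # skip rollouts for this step

    replay_buffer.extend(deferred)   # revisit under-explored prompts later

    # Mix a few previously deferred prompts back in so they still get trained on.
    n_replay = min(int(replay_ratio * max(len(kept), 1)), len(replay_buffer))
    kept.extend(replay_buffer.popleft() for _ in range(n_replay))
    return kept
```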