

Poster

On Stationary Point Convergence of PPO-Clip

Ruinan Jin · Shuai Li · Baoxiang Wang

Halle B #294
Wed 8 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract:

Proximal policy optimization (PPO) has gained popularity in reinforcement learning (RL). Its PPO-Clip variant is one of the most frequently implemented algorithms and is often among the first algorithms tried on RL tasks. This variant uses a clipped surrogate objective function not typically found in other algorithms. Many works have demonstrated the practical performance of PPO-Clip, but its theoretical understanding is limited to specific settings. In this work, we provide a comprehensive analysis that establishes the stationary point convergence of PPO-Clip and the convergence rate thereof. Our analysis is new and overcomes many challenges, including the non-smooth nature of the clip operator, the potentially unbounded score function, and the involvement of the ratio of two stochastic policies. Our results and techniques might offer new insights into PPO-Clip.
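For context, the clipped surrogate objective that defines PPO-Clip, as introduced in the original PPO paper (Schulman et al., 2017), is

L^{\mathrm{CLIP}}(\theta) = \mathbb{E}_t\!\left[ \min\!\big( r_t(\theta)\,\hat{A}_t,\ \mathrm{clip}\big(r_t(\theta),\, 1-\epsilon,\, 1+\epsilon\big)\,\hat{A}_t \big) \right], \qquad r_t(\theta) = \frac{\pi_\theta(a_t \mid s_t)}{\pi_{\theta_{\mathrm{old}}}(a_t \mid s_t)},

where \hat{A}_t is an advantage estimate and \epsilon is the clipping parameter; the poster's exact notation may differ. The clip operator and the min are the non-smooth components, and r_t(\theta) is the ratio of two stochastic policies referred to in the abstract.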
