
In-Person Poster presentation / poster accept

SGDA with shuffling: faster convergence for nonconvex-PŁ minimax optimization

Hanseul Cho · Chulhee Yun

MH1-2-3-4 #110

Keywords: [ Optimization ] [ Polyak-Łojasiewicz ] [ SGDA ] [ without-replacement sampling ] [ random reshuffling ] [ Minimax Optimization ]


Abstract:

Stochastic gradient descent-ascent (SGDA) is one of the main workhorses for solving finite-sum minimax optimization problems. Most practical implementations of SGDA randomly reshuffle components and use them sequentially (i.e., without-replacement sampling); however, there are few theoretical results on this approach for minimax algorithms, especially outside the easier-to-analyze (strongly-)monotone setups. To narrow this gap, we study the convergence bounds of SGDA with random reshuffling (SGDA-RR) for smooth nonconvex-nonconcave objectives with Polyak-Łojasiewicz (PŁ) geometry. We analyze both simultaneous and alternating SGDA-RR for nonconvex-PŁ and primal-PŁ-PŁ objectives, and obtain convergence rates faster than those of with-replacement SGDA. Our rates extend to mini-batch SGDA-RR, recovering known rates for full-batch gradient descent-ascent (GDA). Lastly, we present a comprehensive lower bound for GDA with an arbitrary step-size ratio, which matches the full-batch upper bound for the primal-PŁ-PŁ case.
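For intuition about the sampling scheme analyzed above, the following is a minimal Python sketch of SGDA-RR in both its simultaneous and alternating variants. It is an illustrative outline under stated assumptions, not the authors' implementation: the callables `grad_x`/`grad_y`, the step sizes, and all other names are hypothetical placeholders.

```python
import numpy as np

def sgda_rr(grad_x, grad_y, x, y, n, epochs, eta_x, eta_y,
            alternating=False, seed=0):
    """Sketch of SGDA with random reshuffling (SGDA-RR) for the
    finite-sum minimax problem min_x max_y (1/n) * sum_i f_i(x, y).

    grad_x(i, x, y) and grad_y(i, x, y) are assumed to return the
    gradients of the i-th component f_i (hypothetical interface).
    """
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        perm = rng.permutation(n)       # reshuffle once per epoch, then
        for i in perm:                  # sweep components without replacement
            gx = grad_x(i, x, y)
            if alternating:
                x = x - eta_x * gx      # alternating: descend on x first,
                gy = grad_y(i, x, y)    # then take the y-gradient at the new x
            else:
                gy = grad_y(i, x, y)    # simultaneous: both gradients at (x, y)
                x = x - eta_x * gx
            y = y + eta_y * gy          # ascent step on y
    return x, y
```

The only difference from with-replacement SGDA is the per-epoch permutation: each component is visited exactly once per epoch, which is the structure the paper exploits to obtain faster rates.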
