

Spotlight Poster

Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback

Haolin Liu · Chen-Yu Wei · Julian Zimmert

Halle B #180
Fri 10 May 7:30 a.m. PDT — 9:30 a.m. PDT

Abstract: We study online reinforcement learning in linear Markov decision processes with adversarial losses and bandit feedback. We introduce two algorithms with improved regret guarantees over existing approaches. The first algorithm, although computationally inefficient, achieves a regret of $\widetilde{O}(\sqrt{K})$ without relying on simulators, where $K$ is the number of episodes. This is the first rate-optimal result in the considered setting. The second algorithm is computationally efficient and achieves a regret of $\widetilde{O}(K^{\frac{3}{4}})$. These results significantly improve over the prior state-of-the-art: a computationally inefficient algorithm by Kong et al. (2023) with $\widetilde{O}(K^{\frac{4}{5}}+1/\lambda_{\min})$ regret, and a computationally efficient algorithm by Sherman et al. (2023b) with $\widetilde{O}(K^{\frac{6}{7}})$ regret.
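To make the comparison explicit (a reading aid, not part of the original abstract): since $\widetilde{O}(\cdot)$ hides only polylogarithmic factors, these rates are ordered by their exponents, $\frac{1}{2} < \frac{3}{4} < \frac{4}{5} < \frac{6}{7}$, so $\widetilde{O}(\sqrt{K}) \ll \widetilde{O}(K^{\frac{3}{4}}) \ll \widetilde{O}(K^{\frac{4}{5}}) \ll \widetilde{O}(K^{\frac{6}{7}})$ as $K$ grows. The $\widetilde{O}(\sqrt{K})$ rate matches the $\Omega(\sqrt{K})$ lower bound that already holds in the simpler adversarial multi-armed bandit setting, which is why the first algorithm is called rate-optimal.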
