Online Prediction of Stochastic Sequences with High Probability Regret Bounds
Matthias Frey · Jonathan Manton · Jingge Zhu
Abstract
We revisit the classical problem of universal prediction of stochastic sequences with a finite time horizon $T$ known to the learner. The question we investigate is whether it is possible to derive vanishing regret bounds that hold with high probability, complementing existing bounds from the literature that hold in expectation. We propose such high-probability bounds, which take a form very similar to the prior in-expectation bounds. For the case of universal prediction of a stochastic process over a countable alphabet, our bound establishes a convergence rate of $\mathcal{O}(T^{-1/2} \delta^{-1/2})$ with probability at least $1-\delta$, compared to previously known in-expectation bounds of order $\mathcal{O}(T^{-1/2})$. We also prove an impossibility result showing that the exponent of $\delta$ in a bound of this form cannot be improved without additional assumptions.