

In-Person Poster presentation / poster accept

Generalization Bounds for Federated Learning: Fast Rates, Unparticipating Clients and Unbounded Losses

Xiaolin Hu · Shaojie Li · Yong Liu

MH1-2-3-4 #161

Keywords: [ Theory ] [ generalization error ] [ Risk bound ] [ Unbounded losses ] [ learning theory ] [ federated learning ]


Abstract: In federated learning, the underlying data distributions may differ across clients. This paper provides a theoretical analysis of the generalization error of federated learning that captures both the heterogeneity and the relatedness of these distributions. In particular, we assume that the heterogeneous client distributions are sampled from a meta-distribution. In this two-level distribution framework, we characterize the generalization error not only for clients participating in training but also for unparticipating clients. We first show that the generalization error for unparticipating clients can be bounded by the participating generalization error plus a participating gap caused by client sampling. We further establish fast learning bounds of order $\mathcal{O}(\frac{1}{mn} + \frac{1}{m})$ for unparticipating clients, where $m$ is the number of clients and $n$ is the sample size at each client. To our knowledge, the obtained fast bounds are state-of-the-art in the two-level distribution framework. Moreover, previous theoretical results mostly require the loss function to be bounded. We derive convergence bounds of order $\mathcal{O}(\frac{1}{\sqrt{mn}} + \frac{1}{\sqrt{m}})$ under unbounded-loss assumptions, including sub-exponential and sub-Weibull losses.
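As a rough sketch of the decomposition described above (in illustrative notation not taken from the paper itself: $\tau$ denotes the meta-distribution, $D_1,\dots,D_m$ the sampled client distributions, $R_D$ the risk under distribution $D$, and $\hat{\theta}$ the learned model), the unparticipating-client error can be split by the triangle inequality as
$$\mathbb{E}_{D\sim\tau}\, R_{D}(\hat{\theta}) \;\le\; \underbrace{\frac{1}{m}\sum_{i=1}^{m} R_{D_i}(\hat{\theta})}_{\text{participating generalization error}} \;+\; \underbrace{\Big|\,\mathbb{E}_{D\sim\tau}\, R_{D}(\hat{\theta}) - \frac{1}{m}\sum_{i=1}^{m} R_{D_i}(\hat{\theta})\,\Big|}_{\text{gap from client sampling}},$$
where, plausibly, the first term corresponds to the $\frac{1}{mn}$ part and the second to the $\frac{1}{m}$ part of the stated $\mathcal{O}(\frac{1}{mn} + \frac{1}{m})$ rate; the precise conditions and definitions are those of the paper.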
