Similarity-aware Non-Convex Federated Optimization
Abstract
Federated optimization methods typically employ distinct client-selection strategies: some communicate only with a randomly sampled subset of clients at each round, some predefine a fixed set of clients to contact, and others use a hybrid scheme that combines both. Existing metrics for comparing optimization methods often assume equal communication costs across all strategies, which is rarely the case in practice. In this work, we address the setting where different client-selection strategies incur different costs. We first describe this problem and introduce a simple model that quantifies communication and local computation complexities. The model accommodates three commonly used client-selection strategies and explicitly associates a distinct cost with each. Within this setting, we propose a new algorithm that achieves the best-known communication and local computation complexities among existing methods for non-convex optimization. The algorithm is based on the inexact composite gradient method, with gradient estimators constructed from recursive gradient and SAGA updates. Moreover, it serves as a framework that can incorporate general unbiased gradient estimators, such as SVRG.
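To fix notation for the two estimator styles named above, the following is a minimal single-machine sketch of a SARAH-type recursive gradient update and a SAGA-style control-variate table; it is an illustration of these generic estimators, not the paper's algorithm, and the oracle `grad_fn(x, i)` (the i-th client's gradient at x) and the function names are hypothetical.

```python
import numpy as np

def recursive_gradient_step(grad_fn, x_new, x_old, v_old, i):
    """SARAH-style recursive gradient estimator on sampled client i:
    v_k = grad_i(x_k) - grad_i(x_{k-1}) + v_{k-1}."""
    return grad_fn(x_new, i) - grad_fn(x_old, i) + v_old

def saga_step(grad_fn, x, table, i):
    """SAGA-style unbiased estimator with a per-client gradient table:
    g = grad_i(x) - table[i] + mean(table); the i-th slot is then refreshed."""
    g_i = grad_fn(x, i)
    estimate = g_i - table[i] + table.mean(axis=0)
    table[i] = g_i
    return estimate

# Toy quadratic clients f_i(x) = 0.5 * ||x - a_i||^2 to exercise both estimators.
rng = np.random.default_rng(0)
n, d = 5, 3
a = rng.normal(size=(n, d))
grad_fn = lambda x, i: x - a[i]

x_old = np.zeros(d)
v = np.mean([grad_fn(x_old, j) for j in range(n)], axis=0)  # full-gradient anchor
x_new = x_old - 0.1 * v
v = recursive_gradient_step(grad_fn, x_new, x_old, v, i=2)

table = np.stack([grad_fn(x_old, j) for j in range(n)])
g = saga_step(grad_fn, x_new, table, i=2)
```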