
FedDR -- Randomized Douglas-Rachford Splitting Algorithms for Nonconvex Federated Composite Optimization

Published by Quoc Tran-Dinh
Publication date: 2021
Paper language: English





We develop two new algorithms, called FedDR and asyncFedDR, for solving a fundamental nonconvex composite optimization problem in federated learning. Our algorithms rely on a novel combination of a nonconvex Douglas-Rachford splitting method, randomized block-coordinate strategies, and asynchronous implementation. They can also handle convex regularizers. Unlike recent methods in the literature, e.g., FedSplit and FedPD, our algorithms update only a subset of users at each communication round, and possibly in an asynchronous manner, making them more practical. These new algorithms also achieve communication efficiency and, more importantly, can handle statistical and system heterogeneity, which are the two main challenges in federated learning. Our convergence analysis shows that the new algorithms match the communication complexity lower bound up to a constant factor under standard assumptions. Our numerical experiments illustrate the advantages of our methods compared to existing ones on several datasets.
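To make the structure concrete, here is a minimal Python sketch of a randomized Douglas-Rachford-type client/server loop on a toy problem (quadratic local losses, an l1 regularizer, and a random subset of active clients per round). The losses and constants are illustrative assumptions; this is not the authors' exact FedDR or asyncFedDR update.

```python
# Illustrative randomized DR-type federated loop on a toy problem.
# Assumptions (not from the paper): quadratic local losses
# f_i(x) = 0.5*||x - c_i||^2, an l1 regularizer g, hand-picked constants.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 10, 5
c = rng.normal(size=(n_clients, dim))   # per-client data defining f_i
eta, alpha, lam = 1.0, 1.0, 0.1         # prox step, relaxation, l1 weight

def prox_f(i, y):
    # prox of eta*f_i with f_i(x) = 0.5*||x - c_i||^2 (closed form)
    return (y + eta * c[i]) / (1.0 + eta)

def prox_g(z):
    # prox of eta*lam*||.||_1 (soft-thresholding)
    return np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0)

# per-client states and the initial server point
y = np.zeros((n_clients, dim))
x = np.array([prox_f(i, y[i]) for i in range(n_clients)])
x_hat = 2 * x - y
x_bar = prox_g(x_hat.mean(axis=0))

for rnd in range(200):
    # only a random subset of clients is active in this round
    active = rng.choice(n_clients, size=3, replace=False)
    for i in active:
        y[i] = y[i] + alpha * (x_bar - x[i])   # local DR-style correction
        x[i] = prox_f(i, y[i])                 # local proximal step
        x_hat[i] = 2 * x[i] - y[i]             # reflection sent to the server
    # server aggregates and applies the regularizer's prox
    x_bar = prox_g(x_hat.mean(axis=0))

print("approximate solution:", np.round(x_bar, 3))
```

Inactive clients simply keep their previous local states, which is what allows partial (and, with extra bookkeeping, asynchronous) participation.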


Read also

The alternating direction method of multipliers (ADMM) is widely used in computer graphics for solving optimization problems that can be nonsmooth and nonconvex. It converges quickly to an approximate solution, but can take a long time to converge to a solution of high accuracy. Previously, Anderson acceleration has been applied to ADMM by treating it as a fixed-point iteration for the concatenation of the dual variables and a subset of the primal variables. In this paper, we note that the equivalence between ADMM and Douglas-Rachford splitting reveals that ADMM is in fact a fixed-point iteration in a lower-dimensional space. By applying Anderson acceleration to this lower-dimensional fixed-point iteration, we obtain a more effective approach for accelerating ADMM. We analyze the convergence of the proposed acceleration method on nonconvex problems, and verify its effectiveness on a variety of computer graphics problems including geometry processing and physical simulation.
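As a concrete illustration of the acceleration ingredient, here is a small, generic (type-II) Anderson-acceleration routine applied to a toy fixed-point map; the map G(x) = cos(x) is an assumption for demonstration only and is not the ADMM/Douglas-Rachford operator from the paper.

```python
# Generic type-II Anderson acceleration of a fixed-point iteration x = G(x).
# The toy map G(x) = cos(x) is an assumption for demonstration only.
import numpy as np

def anderson(G, x0, m=5, iters=50, tol=1e-10):
    x = x0.copy()
    g = G(x)
    f = g - x                        # fixed-point residual
    dX, dF = [], []                  # histories of iterate / residual differences
    for _ in range(iters):
        if dX:
            Fk, Xk = np.column_stack(dF), np.column_stack(dX)
            # least-squares combination of past residual differences
            gamma, *_ = np.linalg.lstsq(Fk, f, rcond=None)
            x_next = x + f - (Xk + Fk) @ gamma
        else:
            x_next = g               # plain (un-accelerated) step
        g_next = G(x_next)
        f_next = g_next - x_next
        dX.append(x_next - x); dF.append(f_next - f)
        if len(dX) > m:              # keep only the last m differences
            dX.pop(0); dF.pop(0)
        x, g, f = x_next, g_next, f_next
        if np.linalg.norm(f) < tol:
            break
    return x

print(anderson(np.cos, np.zeros(3)))  # each entry -> fixed point of cos (~0.739)
```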
The last two decades have witnessed increasing interest in the absolute value equation (AVE) of finding $x\in\mathbb{R}^n$ such that $Ax-|x|-b=0$, where $A\in\mathbb{R}^{n\times n}$ and $b\in\mathbb{R}^n$. In this paper, we focus on designing efficient algorithms. To this end, we reformulate the AVE as a generalized linear complementarity problem (GLCP), which, among the equivalent forms, is the most economical one in the sense that it does not increase the dimension of the variables. For solving the GLCP, we propose an inexact Douglas-Rachford splitting method which can adopt a relative error tolerance. As a consequence, in the inner iterations we can employ the LSQR method [C.C. Paige and M.A. Saunders, ACM Trans. Math. Softw. (TOMS), 8 (1982), pp. 43--71] to find a qualified approximate solution of each subproblem, which makes the cost per iteration very low. We prove the convergence of the algorithm and establish its global linear rate of convergence. Comparisons with popular algorithms such as the exact generalized Newton method [O.L. Mangasarian, Optim. Lett., 1 (2007), pp. 3--8], the inexact semi-smooth Newton method [J.Y.B. Cruz, O.P. Ferreira and L.F. Prudente, Comput. Optim. Appl., 65 (2016), pp. 93--108] and the exact SOR-like method [Y.-F. Ke and C.-F. Ma, Appl. Math. Comput., 311 (2017), pp. 195--202] are reported, indicating that the proposed algorithm is very promising. Moreover, our method also extends the range of AVEs that can be solved numerically; that is, it can deal not only with the case $\|A^{-1}\|<1$ commonly assumed in the existing literature, but also with the case $\|A^{-1}\|=1$.
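To make the AVE itself concrete, the short sketch below solves a small made-up instance with the basic Picard iteration $x \leftarrow A^{-1}(|x|+b)$, which contracts when $\|A^{-1}\|<1$; this is only a baseline illustration of the problem, not the paper's inexact Douglas-Rachford/GLCP method.

```python
# Toy absolute value equation Ax - |x| - b = 0, solved with the basic Picard
# iteration x <- A^{-1}(|x| + b); A is built diagonally dominant so ||A^{-1}|| < 1.
# This is a baseline illustration only, not the paper's inexact DR/GLCP method.
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = 4.0 * np.eye(n) + 0.3 * rng.normal(size=(n, n))
x_true = rng.normal(size=n)
b = A @ x_true - np.abs(x_true)          # right-hand side consistent with x_true

x = np.zeros(n)
for _ in range(100):
    x = np.linalg.solve(A, np.abs(x) + b)

print("residual ||Ax - |x| - b|| =", np.linalg.norm(A @ x - np.abs(x) - b))
```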
We consider strongly convex-concave minimax problems in the federated setting, where the communication constraint is the main bottleneck. When clients are arbitrarily heterogeneous, a simple Minibatch Mirror-prox achieves the best performance. As the clients become more homogeneous, using multiple local gradient updates at the clients significantly improves upon Minibatch Mirror-prox by communicating less frequently. Our goal is to design an algorithm that can harness the benefit of similarity in the clients while recovering the Minibatch Mirror-prox performance under arbitrary heterogeneity (up to log factors). We give the first federated minimax optimization algorithm that achieves this goal. The main idea is to combine (i) SCAFFOLD (an algorithm that performs variance reduction across clients for convex optimization) to erase the worst-case dependency on heterogeneity and (ii) Catalyst (a framework for acceleration based on modifying the objective) to accelerate convergence without amplifying client drift. We prove that this algorithm achieves our goal, and include experiments to validate the theory.
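For reference, the Minibatch Mirror-prox baseline mentioned above reduces, in the Euclidean case, to the extragradient method; the snippet below runs it on a toy bilinear saddle problem. The problem instance and step size are illustrative assumptions, and the SCAFFOLD plus Catalyst construction itself is not reproduced here.

```python
# Euclidean mirror-prox (= extragradient) on a toy bilinear saddle problem
# min_x max_y x^T B y; the instance and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
B = 2.0 * np.eye(4) + 0.3 * rng.normal(size=(4, 4))
x, y = rng.normal(size=4), rng.normal(size=4)
step = 0.2                                   # chosen below 1 / ||B||

for _ in range(2000):
    gx, gy = B @ y, B.T @ x                  # gradients at the current point
    x_half, y_half = x - step * gx, y + step * gy   # extrapolation step
    gx, gy = B @ y_half, B.T @ x_half        # gradients at the extrapolated point
    x, y = x - step * gx, y + step * gy      # corrected update

print(np.linalg.norm(x), np.linalg.norm(y))  # both shrink toward the saddle at 0
```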
Douglas-Rachford splitting and its equivalent dual formulation ADMM are widely used iterative methods in composite optimization problems arising in control and machine learning applications. The performance of these algorithms depends on the choice of step size parameters, for which the optimal values are known in some specific cases, and otherwise are set heuristically. We provide a new unified method of convergence analysis and parameter selection by interpreting the algorithm as a linear dynamical system with nonlinear feedback. This approach allows us to derive a dimensionally independent matrix inequality whose feasibility is sufficient for the algorithm to converge at a specified rate. By analyzing this inequality, we are able to give performance guarantees and parameter settings of the algorithm under a variety of assumptions regarding the convexity and smoothness of the objective function. In particular, our framework enables us to obtain a new and simple proof of the O(1/k) convergence rate of the algorithm when the objective function is not strongly convex.
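The phrase "linear dynamical system with nonlinear feedback" can be made concrete with the generic template used in such analyses; the display below is a schematic of that template and of the kind of matrix inequality it produces, not the paper's exact inequality. Write the iteration as

$\xi_{k+1} = A\,\xi_k + B\,w_k, \qquad y_k = C\,\xi_k, \qquad w_k = \phi(y_k),$

where the linear part $(A,B,C)$ encodes the algorithm and its step-size parameters and $\phi$ collects the nonlinear (gradient/proximal) components. Convergence at linear rate $\rho$ is then certified if there exist $P \succ 0$ and $\lambda \ge 0$ such that

$\begin{bmatrix} A^{\top}PA - \rho^{2}P & A^{\top}PB \\ B^{\top}PA & B^{\top}PB \end{bmatrix} + \lambda \begin{bmatrix} C & 0 \\ 0 & I \end{bmatrix}^{\top} M \begin{bmatrix} C & 0 \\ 0 & I \end{bmatrix} \preceq 0,$

where $M$ encodes the assumed convexity and smoothness properties of the objective; the size of the inequality depends only on the algorithm's structure, not on the problem dimension.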
We consider a general class of nonconvex-PL minimax problems in the cross-device federated learning setting. Although nonconvex-PL minimax problems have received a lot of interest in recent years, existing algorithms do not apply to the cross-device federated learning setting, which is substantially different from conventional distributed settings and poses new challenges. To bridge this gap, we propose an algorithmic framework named FedSGDA. FedSGDA performs multiple local update steps on a subset of active clients in each round and leverages global gradient estimates to correct the bias in local update directions. By incorporating FedSGDA with two representative global gradient estimators, we obtain two specific algorithms. We establish convergence rates of the proposed algorithms by using novel potential functions. Experimental results on synthetic and real data corroborate our theory and demonstrate the effectiveness of our algorithms.
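The "local updates plus global gradient correction" idea is easy to see in a toy, non-stochastic sketch; the per-client losses, constants, and the exact correction below are illustrative assumptions, not the authors' FedSGDA updates.

```python
# Toy local gradient-descent-ascent with a drift correction: each active client
# adds (global gradient at the round start minus its own gradient at the round
# start) to every local direction. Losses and constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_clients, dim = 8, 3
a, b = rng.normal(size=(n_clients, dim)), rng.normal(size=(n_clients, dim))

def grads(i, x, y):
    # per-client loss f_i(x, y) = 0.5*||x - a_i||^2 - 0.5*||y - b_i||^2
    return x - a[i], -(y - b[i])

x, y = np.zeros(dim), np.zeros(dim)
lr, local_steps = 0.1, 5

for rnd in range(100):
    # global gradient estimate at the round start (exact average in this toy)
    gx0 = np.mean([grads(i, x, y)[0] for i in range(n_clients)], axis=0)
    gy0 = np.mean([grads(i, x, y)[1] for i in range(n_clients)], axis=0)
    active = rng.choice(n_clients, size=3, replace=False)
    xs, ys = [], []
    for i in active:
        cx0, cy0 = grads(i, x, y)            # this client's gradient at the round start
        xi, yi = x.copy(), y.copy()
        for _ in range(local_steps):
            cx, cy = grads(i, xi, yi)
            xi -= lr * (cx + gx0 - cx0)      # corrected descent step on x
            yi += lr * (cy + gy0 - cy0)      # corrected ascent step on y
        xs.append(xi); ys.append(yi)
    x, y = np.mean(xs, axis=0), np.mean(ys, axis=0)

# the saddle point of the averaged loss is (mean of a_i, mean of b_i)
print(np.round(x - a.mean(axis=0), 4), np.round(y - b.mean(axis=0), 4))
```

In this linear toy the correction removes client drift exactly; with general losses it only reduces it, which is the point of the bias-correction step described above.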

