
Distributed Picard Iteration: Application to Distributed EM and Distributed PCA

Added by Mario Figueiredo
Publication date: 2021
Language: English





In recent work, we proposed a distributed Picard iteration (DPI) that allows a set of agents, linked by a communication network, to find a fixed point of a locally contractive (LC) map that is the average of individual maps held by said agents. In this work, we build upon the DPI and its local linear convergence (LLC) guarantees to make several contributions. We show that Sanger's algorithm for principal component analysis (PCA) corresponds to the iteration of an LC map that can be written as the average of local maps, each known only to the agent holding the corresponding subset of the data. Similarly, we show that a variant of the expectation-maximization (EM) algorithm for parameter estimation from noisy and faulty measurements in a sensor network can be written as the iteration of an LC map that is the average of local maps, each available at just one node. Consequently, via the DPI, we derive two distributed algorithms - distributed EM and distributed PCA - whose LLC guarantees follow from those that we proved for the DPI. The verification of the LC condition for EM is challenging, as the underlying operator depends on random samples; thus the LC condition is probabilistic in nature.
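To make the PCA construction concrete, the following minimal Python sketch writes Sanger's iteration as the average of per-agent maps and runs a plain (centralized) Picard iteration on that average; the DPI would replace the exact averaging below with communication over the network. The step size, synthetic data, and variable names are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
d, r, n_agents, n_per_agent = 10, 3, 5, 200
step = 0.05  # illustrative step size, not from the paper

# Each agent holds a subset of the data, hence a local covariance C_k.
scales = np.diag(np.linspace(1.5, 0.5, d))
data = [rng.standard_normal((n_per_agent, d)) @ scales for _ in range(n_agents)]
C_local = [X.T @ X / X.shape[0] for X in data]

def sanger_map(W, C):
    # Local map F_k(W) = W + step * (W C - LT(W C W^T) W), LT = lower-triangular part.
    WC = W @ C
    return W + step * (WC - np.tril(WC @ W.T) @ W)

# Picard iteration on the average of the local maps (the object the DPI tracks
# in a distributed fashion).
W = 0.1 * rng.standard_normal((r, d))
for _ in range(3000):
    W = sum(sanger_map(W, C) for C in C_local) / n_agents

# Rows of W align (up to sign) with the leading eigenvectors of the average covariance.
C_avg = sum(C_local) / n_agents
V = np.linalg.eigh(C_avg)[1][:, ::-1][:, :r]
print(np.round(np.abs(W @ V), 2))   # approximately the 3x3 identity

In this sketch each local map is affine in the local covariance C_k, so averaging the local maps reproduces the batch Sanger map, which is what makes a construction of this kind compatible with the DPI's average-of-maps setting.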




Read More

The Picard iteration is widely used to find fixed points of locally contractive (LC) maps. This paper extends the Picard iteration to distributed settings; specifically, we assume that the map whose fixed point is sought is the average of individual (not necessarily LC) maps held by a set of agents linked by a sparse communication network. An additional difficulty is that the LC map is not assumed to come from an underlying optimization problem, which prevents exploiting strong global properties such as convexity or Lipschitzianity. Yet, we propose a distributed algorithm and prove its convergence; in fact, we show that it maintains the linear rate of the standard Picard iteration for the average LC map. As another contribution, our proof imports tools from the perturbation theory of linear operators, which, to the best of our knowledge, had not been used before in the theory of distributed computation.
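To give a feel for what such a distributed fixed-point scheme has to accomplish, the sketch below pairs consensus mixing over a ring network with dynamic tracking of the average map, in the style of gradient tracking applied to the residual operator I - F. It is not claimed to be the paper's exact update; the mixing matrix, step size alpha, iteration budget, and the toy affine local maps are assumptions made only for this demonstration.

import numpy as np

rng = np.random.default_rng(1)
n, d, alpha, T = 5, 3, 0.02, 5000

# Toy local affine maps F_k(x) = A_k x + b_k whose average is a contraction.
A = [0.4 * (M / np.linalg.norm(M, 2)) for M in rng.standard_normal((n, d, d))]
b = [rng.standard_normal(d) for _ in range(n)]
F = lambda k, x: A[k] @ x + b[k]
G = lambda k, x: x - F(k, x)   # residual operator, zero at the fixed point

# Ring network with uniform (doubly stochastic) mixing weights.
W = np.zeros((n, n))
for k in range(n):
    W[k, k] = W[k, (k - 1) % n] = W[k, (k + 1) % n] = 1 / 3

# Fixed point of the *average* map, computed centrally for reference only.
A_bar, b_bar = sum(A) / n, sum(b) / n
x_star = np.linalg.solve(np.eye(d) - A_bar, b_bar)

x = np.zeros((n, d))                              # one iterate per agent
y = np.stack([G(k, x[k]) for k in range(n)])      # trackers of the average residual
for _ in range(T):
    x_new = W @ x - alpha * y                     # consensus mixing + tracked step
    y = W @ y + np.stack([G(k, x_new[k]) - G(k, x[k]) for k in range(n)])
    x = x_new

print(np.max(np.linalg.norm(x - x_star, axis=1)))   # should be small for all agents

The point of the construction is that each agent exchanges information only with its two ring neighbours, yet the local iterates track the fixed point of the average map rather than the fixed point of any individual local map.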
Lei Wang, Xin Liu, Yin Zhang (2021)
As a prominent variant of principal component analysis (PCA), sparse PCA attempts to find sparse loading vectors when conducting dimension reduction. This paper computes sparse PCA by solving an optimization problem that pursues orthogonality and sparsity simultaneously. We propose a splitting and alternating approach, leading to an efficient distributed algorithm, called DAL1, for solving this nonconvex and nonsmooth optimization problem. Convergence of DAL1 to stationary points is rigorously established. Computational experiments demonstrate that, owing to its fast convergence in terms of iteration count, DAL1 requires far fewer rounds of communication to reach a prescribed accuracy than existing peer methods. Moreover, unlike existing algorithms, DAL1 carries a relatively small risk of data leakage.
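The abstract does not spell out the model; one common formulation that pursues orthogonality and sparsity simultaneously (stated here only as an assumed example, not necessarily the one solved by DAL1) is, for a centered data matrix $A \in \mathbb{R}^{m \times d}$ and $r$ loading vectors,

$$ \min_{X \in \mathbb{R}^{d \times r}} \; -\operatorname{tr}(X^\top A^\top A X) + \mu \|X\|_1 \quad \text{subject to} \quad X^\top X = I_r , $$

where the orthogonality constraint makes the feasible set nonconvex and the elementwise $\ell_1$ penalty makes the objective nonsmooth, which is why a splitting and alternating scheme that handles the two parts separately is natural.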
We investigate fast and communication-efficient algorithms for the classic problem of minimizing a sum of strongly convex and smooth functions that are distributed among $n$ different nodes, which can communicate using a limited number of bits. Most previous communication-efficient approaches to this problem are limited to first-order optimization, and therefore have linear dependence on the condition number in their communication complexity. We show that this dependence is not inherent: communication-efficient methods can in fact have sublinear dependence on the condition number. To this end, we design and analyze the first communication-efficient distributed variants of preconditioned gradient descent for generalized linear models, and of Newton's method. Our results rely on a new technique for quantizing both the preconditioner and the descent direction at each step of the algorithms, while controlling their convergence rate. We also validate our findings experimentally, showing fast convergence and reduced communication.
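The quantization idea can be illustrated generically: compute a preconditioned (Newton-type) direction for a generalized linear model and round it to a finite grid before it is applied, so that only a bounded number of bits per coordinate would need to be communicated. The quantizer, grid spacing, ridge parameter, and logistic-regression setup below are assumptions made for illustration; the paper's actual quantization scheme and its rate control are not reproduced.

import numpy as np

rng = np.random.default_rng(2)
m, d, lam = 200, 5, 0.1
X = rng.standard_normal((m, d))
w_true = rng.standard_normal(d)
y = (X @ w_true + 0.1 * rng.standard_normal(m) > 0).astype(float)

def quantize(v, delta):
    # Round each coordinate to the nearest point of a grid with spacing delta.
    return delta * np.round(v / delta)

w = np.zeros(d)
for _ in range(30):
    p = 1 / (1 + np.exp(-X @ w))
    grad = X.T @ (p - y) / m + lam * w
    H = (X.T * (p * (1 - p))) @ X / m + lam * np.eye(d)   # GLM Hessian as preconditioner
    direction = np.linalg.solve(H, grad)                  # Newton / preconditioned step
    w -= quantize(direction, delta=1e-3)                  # only the quantized step is "sent"
print(np.round(w, 3))   # near the ridge-regularized logistic-regression solution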
Stochastic gradient methods (SGMs) are the predominant approaches for solving stochastic optimization problems. On smooth nonconvex problems, a few acceleration techniques have been applied to improve the convergence rate of SGMs. However, little exploration has been made of applying acceleration techniques to a stochastic subgradient method (SsGM) for nonsmooth nonconvex problems. In addition, few efforts have been made to analyze an (accelerated) SsGM with delayed derivatives. Such information delay naturally arises in a distributed system, where computing workers do not coordinate with each other. In this paper, we propose an inertial proximal SsGM for solving nonsmooth nonconvex stochastic optimization problems. The proposed method has guaranteed convergence even with delayed derivative information in a distributed environment. Convergence rate results are established for three classes of nonconvex problems: weakly convex nonsmooth problems with a convex regularizer, composite nonconvex problems with a nonsmooth convex regularizer, and smooth nonconvex problems. For each problem class, the convergence rate is $O(1/K^{1/2})$ in the expected value of the squared gradient norm after $K$ iterations. In a distributed environment, the convergence rate of the proposed method is slowed down by the information delay; nevertheless, the slow-down effect decays with the number of iterations for the latter two problem classes. We test the proposed method on three applications. The numerical results clearly demonstrate the advantages of the inertial-based acceleration. Furthermore, we observe higher parallelization speed-up in asynchronous updates than in the synchronous counterpart, even though the former uses delayed derivatives.
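As a concrete, simplified picture of the kind of update being described, the sketch below runs an inertial proximal stochastic gradient step with a fixed derivative delay on a toy l1-regularized least-squares problem. The extrapolation weight beta, step size alpha, delay length, and the problem instance are assumptions made for this demo; the paper's exact update, gradient evaluation point, and delay model are not reproduced.

import numpy as np

rng = np.random.default_rng(3)
m, d, lam, alpha, beta, delay, T = 500, 10, 0.05, 0.005, 0.5, 5, 40000
A = rng.standard_normal((m, d))
x_true = np.zeros(d)
x_true[:3] = [2.0, -1.0, 0.5]
b = A @ x_true + 0.1 * rng.standard_normal(m)

def soft(v, t):
    # Proximal operator of t * ||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x_prev = x = np.zeros(d)
buffer = []                                   # gradients waiting to be applied (the delay)
for _ in range(T):
    i = rng.integers(m)
    buffer.append((A[i] @ x - b[i]) * A[i])   # stochastic gradient at the current iterate
    if len(buffer) <= delay:
        continue                              # warm-up: nothing old enough to use yet
    g_delayed = buffer.pop(0)                 # derivative computed `delay` iterations ago
    z = x + beta * (x - x_prev)               # inertial (momentum-like) extrapolation
    x_prev, x = x, soft(z - alpha * g_delayed, alpha * lam)

print(np.round(x[:5], 2))   # close, up to l1 shrinkage and noise, to [2, -1, 0.5, 0, 0]

Here the delayed derivative is simply the stochastic gradient computed several iterations earlier, mimicking an asynchronous worker that reports stale information.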
