
A Unified Primal Dual Active Set Algorithm for Nonconvex Sparse Recovery

Published by: Xiliang Lu
Publication date: 2013
Research field: Mathematical Statistics
Paper language: English





In this paper, we consider the problem of recovering a sparse signal based on penalized least-squares formulations. We develop a novel algorithm of primal-dual active set type for a class of nonconvex sparsity-promoting penalties, including the $\ell^0$, bridge, smoothly clipped absolute deviation (SCAD), capped $\ell^1$, and minimax concave (MCP) penalties. First, we establish the existence of a global minimizer for the related optimization problems. Then we derive a novel necessary optimality condition for the global minimizer using the associated thresholding operator. The solutions to the optimality system are coordinate-wise minimizers, and under minor conditions, they are also local minimizers. Upon introducing the dual variable, the active set can be determined from the primal and dual variables together. Further, this relation lends itself to an iterative algorithm of active set type, which at each step first updates the primal variable only on the active set and then updates the dual variable explicitly. When combined with a continuation strategy on the regularization parameter, the primal-dual active set method is shown to converge globally to the underlying regression target under certain regularity conditions. Extensive numerical experiments with both simulated and real data demonstrate its superior efficiency and accuracy compared with existing sparse recovery methods.
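
For concreteness, below is a minimal sketch of what one primal-dual active set iteration can look like for the $\ell^0$ penalty, using the standard hard-threshold level $\sqrt{2\lambda}$. The function and variable names are our own illustrative choices, and the continuation loop over $\lambda$ is omitted; this is a sketch of the idea, not the paper's reference implementation.

```python
import numpy as np

def pdas_l0(A, y, lam, max_iter=50):
    """Primal-dual active set sketch for min 0.5*||Ax - y||^2 + lam*||x||_0.

    The active set is read off from the primal and dual variables together,
    the primal variable is refit on the active set only, and the dual
    variable is then updated explicitly.
    """
    n = A.shape[1]
    x = np.zeros(n)
    d = A.T @ y               # dual variable d = A^T(y - A x); x = 0 initially
    tau = np.sqrt(2.0 * lam)  # hard-thresholding level for the l0 penalty
    active = np.zeros(n, dtype=bool)
    for _ in range(max_iter):
        new_active = np.abs(x + d) > tau   # active set from primal + dual
        if np.array_equal(new_active, active):
            break                          # active set stabilized: stop
        active = new_active
        x = np.zeros(n)
        if active.any():
            # update the primal variable only on the active set
            x[active] = np.linalg.lstsq(A[:, active], y, rcond=None)[0]
        d = A.T @ (y - A @ x)              # explicit dual update
    return x
```

The stopping rule exploits a characteristic property of active set methods: once the active set repeats, the iterate is a fixed point of the optimality system and no further updates change it.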




Read also

In this paper, we develop a convex iterative FP thresholding algorithm to solve the problem $(FP^{\lambda}_{a})$. Two schemes are proposed: convex iterative FP thresholding algorithm-Scheme 1 and Scheme 2. A global convergence theorem is proved for Scheme 1. Under an adaptive rule, Scheme 2 is adaptive in the choice of both the regularization parameter $\lambda$ and the parameter $a$; this is an advantage of the two new schemes over our two previously proposed schemes of the iterative FP thresholding algorithm. Finally, we provide a series of numerical simulations to test the performance of Scheme 2, and the results show that it performs very well in recovering a sparse signal.
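
The abstract does not spell out the FP thresholding operator itself, so the sketch below only illustrates the generic iterative-thresholding template that such schemes instantiate, with soft thresholding standing in for the FP operator; all names and parameters are hypothetical.

```python
import numpy as np

def soft_threshold(z, t):
    """Illustrative stand-in for the (unspecified) FP thresholding operator."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def iterative_thresholding(A, y, lam, mu=None, iters=500):
    """Generic iterative thresholding: a gradient step on 0.5*||Ax - y||^2
    followed by a componentwise thresholding, repeated to convergence."""
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1/||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x + mu * A.T @ (y - A @ x), mu * lam)
    return x
```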
We propose an extended primal-dual algorithm framework for solving a general nonconvex optimization model. This work is motivated by image reconstruction problems in a class of nonlinear imaging, where the forward operator can be formulated as a nonlinear convex function of the reconstructed image. Using the proposed framework, we put forward six specific iterative schemes and give a detailed mathematical explanation of each. We also establish their relationship to existing algorithms. Moreover, under proper assumptions, we analyze the convergence of the schemes for the general model when the optimal dual variable associated with the nonlinear operator is non-vanishing. As a representative application, image reconstruction for spectral computed tomography is used to demonstrate the effectiveness of the proposed algorithm framework. Exploiting special properties of this concrete problem, we further prove the convergence of the customized schemes when the optimal dual variable associated with the nonlinear operator vanishes. Finally, numerical experiments show that the proposed algorithm performs well on image reconstruction for various data with non-standard scanning configurations.
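
The six schemes themselves are not reproduced in the abstract. As rough orientation only, here is a hypothetical PDHG-style iteration for a model of the form $\min_x F(K(x)) + G(x)$ with a nonlinear operator $K$ handled through its Jacobian; this is a sketch under our own assumptions, not one of the paper's schemes.

```python
import numpy as np

def nonlinear_pdhg(K, K_jac, prox_Fstar, prox_G, x0, y0,
                   sigma=0.1, tau=0.1, iters=300):
    """Hypothetical PDHG-style iteration for min_x F(K(x)) + G(x), with the
    nonlinear operator K linearized through its Jacobian K_jac(x).
    prox_Fstar and prox_G are proximal maps supplied by the caller."""
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(iters):
        y = prox_Fstar(y + sigma * K(x_bar), sigma)    # dual ascent step
        x_new = prox_G(x - tau * K_jac(x).T @ y, tau)  # primal descent step
        x_bar = 2.0 * x_new - x                        # extrapolation
        x = x_new
    return x, y
```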
This paper investigates accelerating the convergence of distributed optimization algorithms on non-convex problems. We propose a distributed primal-dual stochastic gradient descent (SGD) method equipped with the Powerball technique for acceleration. We show that the proposed algorithm achieves the linear-speedup convergence rate $\mathcal{O}(1/\sqrt{nT})$ for general smooth (possibly non-convex) cost functions. We demonstrate the efficiency of the algorithm through numerical experiments, training two-layer fully connected neural networks and convolutional neural networks on the MNIST dataset, and compare with state-of-the-art distributed SGD algorithms and centralized SGD algorithms.
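
The Powerball transform itself is a one-line modification of the gradient update. A minimal sketch follows; the distributed primal-dual coupling from the paper is omitted, and the step size and exponent are illustrative defaults.

```python
import numpy as np

def powerball_step(w, grad, lr=0.05, gamma=0.6):
    """One Powerball gradient step: the raw (stochastic) gradient g is
    replaced elementwise by sign(g) * |g|**gamma before the descent update.
    gamma is taken in (0, 1); gamma = 1 recovers plain (S)GD."""
    return w - lr * np.sign(grad) * np.abs(grad) ** gamma
```

With $\gamma < 1$, small gradient entries are amplified relative to large ones, which is commonly cited as the source of the empirical speedup.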
Nonlinearly constrained nonconvex and nonsmooth optimization models play an increasingly important role in machine learning, statistics, and data analytics. In this paper, based on the augmented Lagrangian function, we introduce a flexible first-order primal-dual method, called the nonconvex auxiliary problem principle of augmented Lagrangian (NAPP-AL), for solving a class of nonlinearly constrained nonconvex and nonsmooth optimization problems. We demonstrate that NAPP-AL converges to a stationary solution at the rate of $o(1/\sqrt{k})$, where $k$ is the number of iterations. Moreover, under an additional error bound condition (called VP-EB in the paper), we show that the convergence rate is in fact linear. Finally, we show that the well-known Kurdyka-Łojasiewicz property and metric subregularity imply the aforementioned VP-EB condition.
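
NAPP-AL's auxiliary subproblem is not reproduced in the abstract; the sketch below shows only the generic first-order augmented Lagrangian skeleton it builds on, with a few gradient steps standing in for the inexact primal solve. All function names and parameters are our own.

```python
import numpy as np

def al_first_order(f_grad, h, h_jac, x0, rho=10.0, step=1e-2,
                   outer=200, inner=20):
    """First-order augmented Lagrangian sketch for min f(x) s.t. h(x) = 0.

    The primal variable takes a few gradient steps on the augmented
    Lagrangian (an inexact surrogate for the auxiliary problem), after
    which the multiplier is updated by dual ascent.
    """
    x = x0.copy()
    lam = np.zeros_like(h(x0))
    for _ in range(outer):
        for _ in range(inner):
            # gradient of L_rho(x, lam) = f(x) + lam^T h(x) + (rho/2)||h(x)||^2
            g = f_grad(x) + h_jac(x).T @ (lam + rho * h(x))
            x -= step * g
        lam += rho * h(x)   # dual (multiplier) ascent step
    return x, lam
```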
This paper proposes TriPD, a new primal-dual algorithm for minimizing the sum of a Lipschitz-differentiable convex function and two possibly nonsmooth convex functions, one of which is composed with a linear mapping. We devise a randomized block-coordinate version of the algorithm, which converges under the same step-size conditions as the full algorithm. Both the original and the block-coordinate schemes are shown to achieve a linear convergence rate when the functions involved are either piecewise linear-quadratic or satisfy a certain quadratic growth condition (which is weaker than strong convexity). Moreover, we apply the developed algorithms to the problem of multi-agent optimization on a graph, obtaining novel synchronous and asynchronous distributed methods. The proposed algorithms are fully distributed in the sense that the updates and step sizes of each agent depend only on local information; in fact, no prior global coordination is required. Finally, we showcase an application of our algorithm to distributed formation control.
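
TriPD's exact update rule is not given in the abstract. As a classical reference point for the same three-term objective $\min_x f(x) + g(x) + h(Lx)$, here is a sketch of the Condat-Vu primal-dual iteration, a closely related scheme for this problem class; TriPD's own update differs. The callables and step sizes are placeholders supplied by the caller.

```python
import numpy as np

def condat_vu(grad_f, prox_g, prox_hstar, L, x0, y0,
              tau=0.01, sigma=0.01, iters=500):
    """Condat-Vu primal-dual iteration for min_x f(x) + g(x) + h(Lx),
    where f is Lipschitz-differentiable and g, h are nonsmooth convex.
    prox_hstar is the proximal map of the convex conjugate of h."""
    x, y = x0.copy(), y0.copy()
    for _ in range(iters):
        # primal step: forward step on f, backward (prox) step on g
        x_new = prox_g(x - tau * (grad_f(x) + L.T @ y), tau)
        # dual step on the extrapolated primal point
        y = prox_hstar(y + sigma * L @ (2.0 * x_new - x), sigma)
        x = x_new
    return x, y
```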