In this paper, we develop a convex iterative FP thresholding algorithm to solve the problem $(FP^{\lambda}_{a})$. Two schemes of the convex iterative FP thresholding algorithm are proposed: the convex iterative FP thresholding algorithm-Scheme 1 and the convex iterative FP thresholding algorithm-Scheme 2. A global convergence theorem is proved for the convex iterative FP thresholding algorithm-Scheme 1. Under an adaptive rule, the convex iterative FP thresholding algorithm-Scheme 2 is adaptive in the choice of both the regularization parameter $\lambda$ and the parameter $a$. These are the advantages of our two schemes over the two schemes of the iterative FP thresholding algorithm we proposed previously. Finally, we provide a series of numerical simulations to test the performance of the convex iterative FP thresholding algorithm-Scheme 2, and the simulation results show that it performs very well in recovering a sparse signal.
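For illustration, the following is a minimal sketch of a generic iterative thresholding loop of the kind described above, assuming that $(FP^{\lambda}_{a})$ is the fraction-penalty regularized least-squares problem $\min_x \|Ax-b\|_2^2 + \lambda \sum_i a|x_i|/(a|x_i|+1)$. The names `iterative_fp_thresholding`, `fraction_penalty`, the default soft-thresholding operator, and all parameter choices are illustrative assumptions; the paper's actual FP thresholding operator, its two convex schemes, and the adaptive rule for $\lambda$ and $a$ are not reproduced here.

```python
import numpy as np

def fraction_penalty(x, a):
    """Assumed fraction-function penalty P_a(x) = sum_i a|x_i| / (a|x_i| + 1)."""
    ax = a * np.abs(x)
    return np.sum(ax / (ax + 1.0))

def iterative_fp_thresholding(A, b, lam, a, mu=None, threshold=None,
                              max_iter=500, tol=1e-8):
    """Generic iterative thresholding sketch for
        min_x ||Ax - b||_2^2 + lam * P_a(x).
    `threshold` is a placeholder for the paper's FP thresholding operator;
    it defaults to plain soft-thresholding purely for illustration.
    """
    m, n = A.shape
    if mu is None:
        # step size below 1 / ||A||_2^2 keeps the gradient step non-expansive
        mu = 1.0 / (np.linalg.norm(A, 2) ** 2 + 1e-12)
    if threshold is None:
        threshold = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

    x = np.zeros(n)
    for _ in range(max_iter):
        # gradient (Landweber) step on the least-squares term
        z = x + mu * A.T @ (b - A @ x)
        # thresholding step; the actual algorithm applies the FP operator here
        x_new = threshold(z, mu * lam)
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_new
            break
        x = x_new
    return x

# Tiny usage example on a random sparse-recovery instance
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = rng.standard_normal(8)
b = A @ x_true
x_hat = iterative_fp_thresholding(A, b, lam=0.01, a=1.0)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In practice, the thresholding operator, $\lambda$, and $a$ would be replaced by the paper's FP thresholding operator and adaptive rule; the skeleton above only shows the alternation between a least-squares gradient step and a thresholding step that such algorithms share.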