
Nonconvex fraction function recovery sparse signal by convex optimization algorithm

Added by Angang Cui
Publication date: 2019
Research language: English





In this paper, we develop a convex iterative FP thresholding algorithm to solve the problem $(FP^{\lambda}_{a})$. Two schemes are proposed: the convex iterative FP thresholding algorithm-Scheme 1 and the convex iterative FP thresholding algorithm-Scheme 2. A global convergence theorem is proved for Scheme 1. Under an adaptive rule, Scheme 2 is adaptive in the choice of both the regularization parameter $\lambda$ and the parameter $a$. These are the advantages of the two new schemes over our two previously proposed iterative FP thresholding algorithms. Finally, we provide a series of numerical simulations to test the performance of Scheme 2, and the results show that it recovers sparse signals very well.
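The abstract does not spell out the FP thresholding operator, so the following is only a minimal sketch of a generic iterative thresholding loop for regularized least squares; the function name, the soft-thresholding surrogate, the step-size rule, and the stopping criterion are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def iterative_thresholding(A, b, lam=0.1, n_iter=500, tol=1e-8):
    """Generic iterative thresholding for min_x 0.5*||Ax - b||^2 + lam*penalty(x).

    Soft thresholding stands in for the paper's FP thresholding operator,
    which is not given in the abstract.
    """
    m, n = A.shape
    # A step size below 1/||A||_2^2 keeps the gradient step non-expansive.
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(n)
    for _ in range(n_iter):
        # Gradient step on the least-squares data-fidelity term.
        z = x - mu * (A.T @ (A @ x - b))
        # Thresholding step (placeholder operator).
        x_new = np.sign(z) * np.maximum(np.abs(z) - mu * lam, 0.0)
        if np.linalg.norm(x_new - x) <= tol * max(np.linalg.norm(x), 1.0):
            x = x_new
            break
        x = x_new
    return x

# Toy usage: recover a 5-sparse signal from 60 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true
x_hat = iterative_thresholding(A, b, lam=0.01)
```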



Related research

In this paper, we consider the problem of recovering a sparse signal based on penalized least squares formulations. We develop a novel algorithm of primal-dual active set type for a class of nonconvex sparsity-promoting penalties, including $\ell^0$, bridge, smoothly clipped absolute deviation, capped $\ell^1$ and the minimax concave penalty. First we establish the existence of a global minimizer for the related optimization problems. Then we derive a novel necessary optimality condition for the global minimizer using the associated thresholding operator. The solutions to the optimality system are coordinate-wise minimizers, and under minor conditions, they are also local minimizers. Upon introducing the dual variable, the active set can be determined using the primal and dual variables together. Further, this relation lends itself to an iterative algorithm of active set type which at each step involves first updating the primal variable only on the active set and then updating the dual variable explicitly. When combined with a continuation strategy on the regularization parameter, the primal-dual active set method is shown to converge globally to the underlying regression target under certain regularity conditions. Extensive numerical experiments with both simulated and real data demonstrate its superior performance in efficiency and accuracy compared with the existing sparse recovery methods.
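As an illustration only, here is a rough sketch of a primal-dual active set iteration for the simplest penalty named above, $\ell^0$; the hard-threshold level $\sqrt{2\lambda}$, the least-squares refit on the active set, and all names are assumptions, and the paper's treatment of the other penalties and of continuation is not reproduced.

```python
import numpy as np

def pdas_l0(A, y, lam, max_iter=50):
    """Sketch of a primal-dual active set iteration for
    min_x 0.5*||Ax - y||^2 + lam*||x||_0  (illustrative, l0 case only)."""
    n = A.shape[1]
    x = np.zeros(n)
    d = A.T @ y                       # dual variable at x = 0
    thr = np.sqrt(2.0 * lam)          # hard-threshold level for the l0 penalty
    active_prev = np.zeros(n, dtype=bool)
    for _ in range(max_iter):
        # Active set determined from the primal and dual variables together.
        active = np.abs(x + d) > thr
        if np.array_equal(active, active_prev):
            break
        active_prev = active
        x = np.zeros(n)
        if active.any():
            # Update the primal variable only on the active set (least-squares refit).
            x[active], *_ = np.linalg.lstsq(A[:, active], y, rcond=None)
        # Update the dual variable explicitly.
        d = A.T @ (y - A @ x)
    return x
```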
We study the problem of reconstructing a block-sparse signal from compressively sampled measurements. In certain applications, in addition to the inherent block-sparse structure of the signal, some prior information about the block support, i.e. the blocks containing non-zero elements, might be available. Although many block-sparse recovery algorithms have been investigated in the Bayesian framework, it is still unclear how to incorporate the information about the probability of occurrence into regularization-based block-sparse recovery in an optimal sense. In this work, we bridge these fields with the aid of a new concept in conic integral geometry. Specifically, we solve a weighted optimization problem when the prior distribution of the block support is available. Moreover, we obtain the unique weights that minimize the expected required number of measurements. Our simulations on both synthetic and real data confirm that these weights considerably decrease the required sample complexity.
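The weighted problem is not written out in this summary; a common choice is a weighted group lasso, and the proximal-gradient sketch below assumes that form (block soft thresholding with per-block weights) purely for illustration. The optimal weights derived in the paper are not computed here, and all names and parameters are assumptions.

```python
import numpy as np

def weighted_block_soft(z, blocks, weights, tau):
    """Block soft thresholding: shrink each block's l2 norm by tau * weight."""
    x = np.zeros_like(z)
    for blk, w in zip(blocks, weights):
        nrm = np.linalg.norm(z[blk])
        if nrm > tau * w:
            x[blk] = (1.0 - tau * w / nrm) * z[blk]
    return x

def weighted_group_lasso(A, b, blocks, weights, lam=0.1, n_iter=300):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam * sum_g w_g * ||x_g||_2."""
    mu = 1.0 / (np.linalg.norm(A, 2) ** 2)            # safe step size
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - mu * (A.T @ (A @ x - b))              # gradient step
        x = weighted_block_soft(z, blocks, weights, mu * lam)  # weighted prox step
    return x

# `blocks` is a list of index arrays, e.g. [np.arange(0, 4), np.arange(4, 8), ...].
```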
We propose an accelerated meta-algorithm, which allows one to obtain accelerated methods for convex unconstrained minimization in different settings. As an application of the general scheme, we propose nearly optimal methods for minimizing smooth functions with Lipschitz derivatives of an arbitrary order, as well as for smooth minimax optimization problems. The proposed meta-algorithm is more general than those in the literature and allows one to obtain better convergence rates and practical performance in several settings.
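The meta-algorithm itself cannot be reconstructed from this summary; as a point of reference only, the sketch below shows plain Nesterov acceleration for smooth convex minimization, the kind of first-order accelerated method such general schemes subsume. The momentum rule and step size are standard textbook choices, not the paper's.

```python
import numpy as np

def nesterov_agd(grad, x0, L, n_iter=200):
    """Nesterov accelerated gradient descent for an L-smooth convex function,
    given its gradient oracle `grad` (illustrative baseline, not the meta-algorithm)."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = y - grad(y) / L                           # gradient step at the extrapolated point
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0  # momentum parameter update
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)     # extrapolation (momentum) step
        x, t = x_new, t_new
    return x
```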
Stochastic gradient descent (SGD) is one of the most widely used optimization methods for parallel and distributed processing of large datasets. One of the key limitations of distributed SGD is the need to regularly communicate the gradients between different computation nodes. To reduce this communication bottleneck, recent work has considered a one-bit variant of SGD, where only the sign of each gradient element is used in optimization. In this paper, we extend this idea by proposing a stochastic variant of the proximal-gradient method that also uses one bit per update element. We prove the theoretical convergence of the method for non-convex optimization under a set of explicit assumptions. Our results indicate that the compressed method can match the convergence rate of the uncompressed one, making the proposed method potentially appealing for distributed processing of large datasets.
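As a hedged illustration of the idea, the sketch below compresses each stochastic gradient to its sign before a proximal step (here an $\ell^1$ soft-thresholding prox); the learning rate, the choice of prox, and the function names are assumptions, not the paper's exact method.

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def one_bit_prox_sgd(stoch_grad, x0, lam=0.01, lr=0.01, n_iter=1000):
    """One-bit stochastic proximal-gradient sketch: only sign(gradient) is used
    per update element, followed by an l1 proximal step (illustrative only)."""
    x = x0.copy()
    for _ in range(n_iter):
        g = stoch_grad(x)                                  # stochastic gradient at x
        x = soft_threshold(x - lr * np.sign(g), lr * lam)  # 1-bit step + prox
    return x
```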
This paper investigates how to accelerate the convergence of distributed optimization algorithms on nonconvex problems when only zeroth-order information is available. We propose a zeroth-order (ZO) distributed primal-dual stochastic coordinate algorithm equipped with the Powerball method for acceleration. We prove that the proposed algorithm has a convergence rate of $\mathcal{O}(\sqrt{p}/\sqrt{nT})$ for general nonconvex cost functions. We consider the problem of generating adversarial examples from black-box DNNs to compare with the existing state-of-the-art centralized and distributed ZO algorithms. The numerical results demonstrate the faster convergence rate of the proposed algorithm and match the theoretical analysis.
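The distributed primal-dual machinery is not given in this summary; the sketch below only illustrates the two zeroth-order ingredients mentioned, a two-point gradient estimate along random coordinates and the Powerball transform $\mathrm{sign}(g)\,|g|^{\gamma}$, in a single-node update. The smoothing radius, $\gamma$, and all names are illustrative assumptions.

```python
import numpy as np

def zo_coordinate_grad(f, x, idx, delta=1e-4):
    """Two-point zeroth-order estimate of the partial derivatives in `idx`."""
    g = np.zeros_like(x)
    for i in idx:
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + delta * e) - f(x - delta * e)) / (2.0 * delta)
    return g

def zo_powerball_step(f, x, lr=0.05, gamma=0.5, p=5, rng=np.random.default_rng(0)):
    """One single-node update: ZO gradient on p random coordinates + Powerball transform."""
    idx = rng.choice(x.size, size=p, replace=False)    # random coordinate block
    g = zo_coordinate_grad(f, x, idx)
    return x - lr * np.sign(g) * np.abs(g) ** gamma    # Powerball: sign(g) * |g|^gamma
```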