In this paper, we consider a class of nonconvex problems with linear constraints that appear frequently in image processing. We solve this problem with a penalty method and propose an iteratively reweighted alternating minimization algorithm. To speed up the algorithm, we also apply a continuation strategy to the penalty parameter. A convergence result is proved for the algorithm. Compared with the nonconvex ADMM, the proposed algorithm enjoys both theoretical and computational advantages, such as weaker convergence requirements and faster speed. Numerical results demonstrate the efficiency of the proposed algorithm.
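As a rough illustration of the general recipe described above (a quadratic penalty on the linear constraint, alternating minimization of the penalized objective, and continuation on the penalty parameter), the following Python sketch shows only the outer structure. The names `penalty_am`, `update_x`, and `update_z` and the schedule parameters are placeholders introduced here, the iterative reweighting of the nonconvex term is assumed to happen inside the caller-supplied subproblem solvers, and nothing below should be read as the paper's actual implementation.

```python
import numpy as np

def penalty_am(update_x, update_z, x0, z0,
               beta0=1.0, beta_max=1e6, rho=2.0,
               inner_iters=50, tol=1e-6):
    """Generic penalty method with alternating minimization and continuation.

    Conceptually targets  min_x f(x) + g(z)  s.t.  A x = z  by alternately
    minimizing the penalized objective  f(x) + g(z) + (beta/2)||A x - z||^2
    in x and in z, then increasing beta (continuation).  The subproblem
    solvers update_x(x, z, beta) and update_z(x, z, beta) are supplied by
    the caller and may themselves use reweighted surrogates.
    """
    x, z = x0.copy(), z0.copy()
    beta = beta0
    while beta <= beta_max:
        for _ in range(inner_iters):
            x_new = update_x(x, z, beta)      # approximate argmin in x
            z_new = update_z(x_new, z, beta)  # approximate argmin in z
            done = np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x))
            x, z = x_new, z_new
            if done:
                break
        beta *= rho  # continuation: gradually tighten the penalty
    return x, z
```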
Nonsmooth sparsity constrained optimization captures a broad spectrum of applications in machine learning and computer vision. However, this problem is NP-hard in general. Existing solutions suffer from one or more of the following limitations: they fail to handle general nonsmooth problems; they lack convergence analysis; they only guarantee weaker optimality conditions. This paper revisits the Penalty Alternating Direction Method (PADM) for nonsmooth sparsity constrained optimization problems. We consider two variants of the PADM, namely PADM based on Iterative Hard Thresholding (PADM-IHT) and PADM based on Block Coordinate Decomposition (PADM-BCD). We show that the PADM-BCD algorithm finds stronger stationary points of the optimization problem than previous methods. We also develop novel theories to analyze the convergence rate of both the PADM-IHT and PADM-BCD algorithms. Our theoretical bounds can exploit the inherent sparsity of the optimization problem. Finally, numerical results demonstrate the superiority of PADM-BCD over existing sparse optimization algorithms.
Keywords: Sparsity Recovery, Nonsmooth Optimization, Non-Convex Optimization, Block Coordinate Decomposition, Iterative Hard Thresholding, Convergence Analysis
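To make the hard-thresholding building block concrete, here is a minimal Python sketch of classical iterative hard thresholding applied to a smooth least-squares objective under a cardinality constraint. This shows only the standard IHT ingredient, not the PADM-IHT or PADM-BCD algorithms studied in the paper, and the function names and defaults are illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x and zero out the rest."""
    z = np.zeros_like(x)
    if s <= 0:
        return z
    idx = np.argpartition(np.abs(x), -s)[-s:]
    z[idx] = x[idx]
    return z

def iht_least_squares(A, b, s, step=None, iters=500):
    """Iterative hard thresholding for  min ||Ax - b||^2  s.t.  ||x||_0 <= s."""
    if step is None:
        # 1 / Lipschitz constant of the gradient (spectral norm of A squared)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = hard_threshold(x - step * grad, s)
    return x
```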
In this paper, we consider a class of nonsmooth nonconvex optimization problems whose objective is the sum of a block relative smooth function and a proper, lower semicontinuous, block separable function. Although the analysis of block proximal gradient (BPG) methods for the class of block $L$-smooth functions has been successfully extended to Bregman BPG methods that deal with the class of block relative smooth functions, accelerated Bregman BPG methods are scarce and challenging to design. Taking our inspiration from Nesterov-type acceleration and the majorization-minimization scheme, we propose a block alternating Bregman Majorization-Minimization framework with Extrapolation (BMME). We prove subsequential convergence of BMME to a first-order stationary point under mild assumptions, and study its global convergence under stronger conditions. We illustrate the effectiveness of BMME on the penalized orthogonal nonnegative matrix factorization problem.
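The interplay of extrapolation and majorization-minimization can be sketched generically: build a majorizing surrogate at an extrapolated point, minimize it, and update the momentum parameter. The Python sketch below is a bare-bones illustration under these assumptions; `mm_step` and the Nesterov-style momentum schedule are placeholders and do not reproduce BMME's block alternating Bregman updates or its step-size rules.

```python
import numpy as np

def mm_with_extrapolation(mm_step, x0, iters=100, extrapolate=True):
    """Majorization-minimization loop with Nesterov-type extrapolation.

    mm_step(y) is assumed to minimize a majorizing surrogate of the
    objective built around the extrapolated point y and return the next
    iterate.
    """
    x_prev = x0.copy()
    x = x0.copy()
    t_prev, t = 1.0, 1.0
    for _ in range(iters):
        beta = (t_prev - 1.0) / t if extrapolate else 0.0
        y = x + beta * (x - x_prev)        # extrapolation (momentum) step
        x_prev, x = x, mm_step(y)          # minimize the surrogate built at y
        t_prev, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    return x
```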
The paper presents a fully coupled TV-Stokes model and proposes an algorithm based on alternating minimization of the objective functional, whose first iteration is exactly the modified TV-Stokes model proposed earlier. The model is a generalization of the second-order Total Generalized Variation model. A convergence analysis is given.
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications. This paper proposes a unit-modulus wireless FL (UMWFL) framework, which simultaneously uploads local model parameters and computes global model parameters via optimized phase shifting. The proposed framework avoids sophisticated baseband signal processing, leading to both low communication delay and low implementation cost. A training loss bound is derived and a penalty alternating minimization (PAM) algorithm is proposed to minimize this nonconvex nonsmooth bound. Experimental results on the Car Learning to Act (CARLA) platform show that the proposed UMWFL framework with the PAM algorithm achieves smaller training losses and testing errors than the benchmark scheme.
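One concrete ingredient implied by a unit-modulus design is a phase-only projection of the transmit weights. The snippet below is a hedged sketch of that projection alone, with invented variable names; it is not the paper's PAM algorithm or its training-loss bound.

```python
import numpy as np

def project_unit_modulus(w):
    """Map each complex entry to the closest point on the unit circle,
    i.e. keep only its phase (a unit-modulus / phase-shifting constraint)."""
    w = np.asarray(w, dtype=complex)
    mag = np.abs(w)
    out = np.ones_like(w)          # zero entries map to an arbitrary unit phase
    nz = mag > 0
    out[nz] = w[nz] / mag[nz]
    return out
```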
We present a novel algorithm that allows us to gain detailed insight into the effects of sparsity in linear and nonlinear optimization, which is of great importance in many scientific areas such as image and signal processing, medical imaging, compressed sensing, and machine learning (e.g., for the training of neural networks). Sparsity is an important feature to ensure robustness against noisy data, but also to find models that are interpretable and easy to analyze due to the small number of relevant terms. It is common practice to enforce sparsity by adding the $\ell_1$-norm as a weighted penalty term. In order to gain a better understanding and to allow for an informed model selection, we directly solve the corresponding multiobjective optimization problem (MOP) that arises when we minimize the main objective and the $\ell_1$-norm simultaneously. As this MOP is in general non-convex for nonlinear objectives, the weighting method will fail to provide all optimal compromises. To avoid this issue, we present a continuation method which is specifically tailored to MOPs with two objective functions, one of which is the $\ell_1$-norm. Our method can be seen as a generalization of well-known homotopy methods for linear regression problems to the nonlinear case. Several numerical examples - including neural network training - demonstrate our theoretical findings and the additional insight that can be gained by this multiobjective approach.
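For the scalarized "weighting method" baseline mentioned above, a standard sketch is proximal gradient descent with soft thresholding on a linear least-squares objective, sweeping the weight of the $\ell_1$ penalty to trace the trade-off between data fit and sparsity. This is only the conventional weighted-penalty approach that the abstract contrasts with the proposed continuation method, written with assumed names and defaults; for nonconvex objectives it need not recover all optimal compromises.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def weighted_l1_least_squares(A, b, lam, iters=500):
    """Proximal gradient (ISTA) for  min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - step * grad, lam * step)
    return x

# Sweeping lam over a grid traces the trade-off between data fit and sparsity,
# i.e. the scalarized view of the two-objective problem:
# xs = [weighted_l1_least_squares(A, b, lam) for lam in np.logspace(-3, 1, 9)]
```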