Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization, $\ell_1$ norm regularized optimization, and $\ell_0$ norm regularized optimization as special cases. This paper proposes and analyzes a new Generalized Matrix Splitting Algorithm (GMSA) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems. Our algorithm is derived from a novel triangle operator mapping, which can be computed exactly using a new generalized Gaussian elimination procedure. We establish the global convergence, convergence rate, and iteration complexity of GMSA for convex problems. In addition, we discuss several important extensions of GMSA. Finally, we validate the performance of our proposed method on three particular applications: nonnegative matrix factorization, $\ell_0$ norm regularized sparse coding, and the $\ell_1$ norm regularized Dantzig selector problem. Extensive experiments show that our method achieves state-of-the-art performance in terms of both efficiency and efficacy.
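As background for the splitting viewpoint, the sketch below implements the classical Successive Over-Relaxation (SOR) iteration that GMSA generalizes; with omega = 1 it reduces to Gauss-Seidel. The function name and the test problem are illustrative, not taken from the paper.

```python
# Classical SOR via the matrix splitting A = M - N, with M = D/omega + L.
import numpy as np
from scipy.linalg import solve_triangular

def sor(A, b, omega=1.5, tol=1e-10, max_iter=1000):
    """Solve Ax = b by iterating x <- M^{-1} (N x + b)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, k=-1)        # strictly lower-triangular part of A
    M = D / omega + L           # lower triangular, so each step is cheap
    N = M - A
    x = np.zeros_like(b)
    for _ in range(max_iter):
        # Forward substitution solves the triangular system M x_new = N x + b,
        # the classical analogue of the paper's "triangle operator".
        x_new = solve_triangular(M, N @ x + b, lower=True)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Usage on a small symmetric positive-definite system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(sor(A, b))                # matches np.linalg.solve(A, b)
```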
Fourier domain structured low-rank matrix priors are emerging as powerful alternatives to traditional image recovery methods such as total variation and wavelet regularization. These priors specify that a convolutional structured matrix, i.e., a Toeplitz matrix, …
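To make the structured low-rank idea concrete, the hedged sketch below shows a generic instance of it: uniform samples of a sum of k complex exponentials fill a Hankel matrix (a convolution-structured matrix, equivalent to a Toeplitz matrix up to row reversal) of rank k. This is standard signal-processing background, not the paper's specific model.

```python
import numpy as np
from scipy.linalg import hankel

k, n = 3, 64
rng = np.random.default_rng(0)
freqs = rng.uniform(0, 1, k)
# Samples of a sum of k complex exponentials.
x = sum(np.exp(2j * np.pi * f * np.arange(n)) for f in freqs)

# Hankel matrix whose anti-diagonals hold consecutive samples of x.
H = hankel(x[:32], x[31:])
print(np.linalg.matrix_rank(H))   # == k, the number of exponentials
```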
We consider the problem of approximate joint triangularization of a set of noisy jointly diagonalizable real matrices. Approximate joint triangularizers are commonly used in the estimation of the joint eigenstructure of a set of matrices, with applications …
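A small sketch of the quantity such methods drive down may help: given matrices {A_i} and an orthogonal Q, the residual below measures how far the matrices Q^T A_i Q are from being simultaneously upper triangular. This objective form is a common choice in the joint-triangularization literature; the paper's exact formulation may differ.

```python
import numpy as np
from scipy.linalg import schur

def residual(As, Q):
    """Sum of squared strictly lower-triangular entries of Q.T @ A @ Q."""
    return sum(np.linalg.norm(np.tril(Q.T @ A @ Q, k=-1), "fro") ** 2
               for A in As)

# Noiseless case: matrices sharing one eigenbasis also share a Schur basis,
# so the Schur vectors of any one of them make the residual vanish.
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 5))
As = [P @ np.diag(rng.standard_normal(5)) @ np.linalg.inv(P)
      for _ in range(3)]
_, Q = schur(As[0])
print(residual(As, Q))   # ~0 up to rounding; noise would make it positive
```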
In [5], Srijuntongsiri and Vavasis propose the Kantorovich-Test Subdivision algorithm, or KTS, an algorithm for finding all zeros of a polynomial system in a bounded region of the plane. This algorithm can be used to find the intersections between …
In recent studies on sparse modeling, $\ell_q$ ($0<q<1$) regularized least squares regression ($\ell_q$LS) has received considerable attention due to its advantages in sparsity induction and bias reduction over its convex counterparts. In this paper, we …
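One standard baseline for this nonconvex penalty (not necessarily the algorithm analyzed in the paper) is iteratively reweighted least squares on a smoothed surrogate $\sum_i (x_i^2+\epsilon)^{q/2}$; a minimal sketch, with illustrative names and parameters:

```python
import numpy as np

def lq_irls(A, b, lam=0.1, q=0.5, eps=1e-8, n_iter=100):
    """Approximately minimize ||Ax - b||^2 + lam * sum_i |x_i|^q."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares warm start
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(n_iter):
        # Quadratic upper bound of the penalty at the current iterate:
        # lam * |x_i|^q  <~  0.5 * w_i * x_i^2 + const, with these weights.
        w = lam * q * (x ** 2 + eps) ** (q / 2 - 1)
        x = np.linalg.solve(2 * AtA + np.diag(w), 2 * Atb)
    return x

# Usage: recover a sparse vector from a few noisy linear measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(40)
print(np.round(lq_irls(A, b)[:8], 3))   # first entries near x_true, rest ~0
```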
In this paper, we present several descent methods that can be applied to nonnegative matrix factorization, and we analyze a recently developed fast block coordinate method called Rank-one Residue Iteration (RRI). We also give a comparison of these different …
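A minimal sketch of the rank-one update pattern that RRI-style methods use for NMF, $V \approx WH$ with $W, H \ge 0$: each rank-one term $w_t h_t^T$ is refit in closed form against the residual of the remaining terms. Variable names, the fixed iteration count, and the small damping constant are illustrative choices, not taken from the paper.

```python
import numpy as np

def rri_nmf(V, rank, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        for t in range(rank):
            # Residual once the t-th rank-one term is removed.
            R = V - W @ H + np.outer(W[:, t], H[t, :])
            # Closed-form nonnegative least-squares update for each factor.
            H[t, :] = np.maximum(R.T @ W[:, t], 0) / (W[:, t] @ W[:, t] + 1e-12)
            W[:, t] = np.maximum(R @ H[t, :], 0) / (H[t, :] @ H[t, :] + 1e-12)
    return W, H

# Usage on a small random nonnegative matrix.
V = np.random.default_rng(1).random((20, 15))
W, H = rri_nmf(V, rank=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))   # relative fit error
```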