We present an iterative support shrinking algorithm for $\ell_{p}$-$\ell_{q}$ minimization ($0 < p < 1 \leq q < \infty$). The algorithm guarantees the non-expansiveness of the signal support set and can be easily implemented after proximal linearization. The resulting subproblem can be solved very efficiently owing to its convexity and its shrinking size along the iterations. We prove that the iterates of the algorithm converge globally to a stationary point of the $\ell_{p}$-$\ell_{q}$ objective function. In addition, we establish a lower bound theory for the iteration sequence, which is more practical than the lower bound results for local minimizers in the literature.
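For orientation, one standard form of the $\ell_{p}$-$\ell_{q}$ problem this class of algorithms targets is sketched below; the particular pairing of regularizer and data-fidelity term and the weight $\lambda$ are assumptions, not details taken from the abstract:
\[
\min_{x \in \mathbb{R}^{n}} \; \lambda\,\|x\|_{p}^{p} + \|Ax - b\|_{q}^{q}, \qquad 0 < p < 1 \leq q < \infty,
\]
where $\|x\|_{p}^{p} = \sum_i |x_i|^{p}$ promotes sparsity and "support shrinking" refers to the index set $\{\, i : x_i \neq 0 \,\}$ never growing across iterations.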
In this paper we propose a primal-dual homotopy method for $\ell_1$-minimization problems with infinity norm constraints in the context of sparse reconstruction. The natural homotopy parameter is the value of the bound for the constraints, and we show
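As a hedged illustration only (the abstract is cut off before the precise formulation is stated), one common instance of $\ell_1$-minimization with an infinity norm constraint in sparse reconstruction is
\[
\min_{x} \; \|x\|_{1} \quad \text{subject to} \quad \|Ax - b\|_{\infty} \leq \delta,
\]
where the bound $\delta$ would play the role of the homotopy parameter described above.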
The Chebyshev or $\ell_{\infty}$ estimator is an unconventional alternative to ordinary least squares for solving linear regressions. It is defined as the minimizer of the $\ell_{\infty}$ objective function
\begin{align*}
\hat{\boldsymbol{\beta}} := \arg\min_{\boldsymbol{\beta}} \|\boldsymbol{Y} - \boldsymbol{X}\boldsymbol{\beta}\|_{\infty}.
\end{align*}
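Since the $\ell_{\infty}$ fit reduces to a linear program, a minimal sketch in Python is given below; the function name chebyshev_fit and the synthetic data are illustrative assumptions, not part of the cited work.

    import numpy as np
    from scipy.optimize import linprog

    def chebyshev_fit(X, y):
        """L-infinity (Chebyshev) regression via linear programming.

        Solves  min_{beta, t} t  s.t.  -t <= y - X beta <= t  (elementwise),
        which is equivalent to  min_beta ||y - X beta||_inf.
        Illustrative sketch only, not the cited paper's method.
        """
        n, p = X.shape
        # Decision vector z = [beta (p entries), t (1 entry)]
        c = np.zeros(p + 1)
        c[-1] = 1.0                                  # minimize t
        ones = np.ones((n, 1))
        A_ub = np.vstack([np.hstack([X, -ones]),     #  X beta - t <=  y
                          np.hstack([-X, -ones])])   # -X beta - t <= -y
        b_ub = np.concatenate([y, -y])
        bounds = [(None, None)] * p + [(0, None)]    # beta free, t >= 0
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        return res.x[:p]

    # Example: recover coefficients from data with bounded noise
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    beta_true = np.array([1.0, -2.0, 0.5])
    y = X @ beta_true + rng.uniform(-0.1, 0.1, size=100)
    print(chebyshev_fit(X, y))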
The task of predicting missing entries of a matrix from a subset of known entries is known as \textit{matrix completion}. In today's data-driven world, data completion is essential, whether it is the main goal or a pre-processing step. Structured matrix
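Because the abstract is cut short before the method is described, only the standard problem statement is sketched here for context: given observed entries $M_{ij}$ on an index set $\Omega$, matrix completion seeks
\[
\min_{X} \; \operatorname{rank}(X) \quad \text{subject to} \quad X_{ij} = M_{ij}, \;\; (i,j) \in \Omega,
\]
commonly relaxed to the convex nuclear-norm problem $\min_{X} \|X\|_{*}$ under the same constraints.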
Faraday tomography offers crucial information on magnetized astronomical objects, such as quasars, galaxies, or galaxy clusters, by observing their magnetoionic media. The observed linear polarization spectrum is inverse Fourier transformed to obtain
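For reference, the Fourier relation underlying Faraday tomography links the complex linear polarization $P$, as a function of squared wavelength $\lambda^{2}$, to the Faraday dispersion function $F(\phi)$ of Faraday depth $\phi$:
\[
P(\lambda^{2}) = \int_{-\infty}^{\infty} F(\phi)\, e^{2i\phi\lambda^{2}}\, d\phi,
\]
so recovering $F(\phi)$ from measurements of $P$ at a limited set of wavelengths is the ill-posed inverse Fourier problem the abstract refers to.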
We propose an iterative algorithm for the minimization of an $\ell_1$-norm penalized least squares functional under additional linear constraints. The algorithm is fully explicit: it uses only matrix multiplications with the three matrices present in
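As background only, a minimal sketch of plain iterative soft-thresholding for the unconstrained $\ell_1$-penalized least squares problem is shown below; it ignores the additional linear constraints that are the subject of the abstract, and the step size and iteration count are assumptions.

    import numpy as np

    def ista(A, b, lam, n_iter=500):
        """Iterative soft-thresholding for  min_x 0.5*||A x - b||_2^2 + lam*||x||_1.

        Background sketch only; it does not handle the additional linear
        constraints discussed in the abstract above.
        """
        step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, with L the Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = x - step * A.T @ (A @ x - b)         # gradient step on the quadratic term
            x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-thresholding
        return x

Each iteration involves only multiplications by $A$ and $A^{T}$ plus a componentwise shrinkage, which is what makes such schemes attractive for large-scale sparse reconstruction.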