Quantile regression is studied in combination with a penalty that promotes structured (or group) sparsity. A mixed $\ell_{1,\infty}$-norm on the parameter vector is used to impose structured sparsity on the traditional quantile regression problem. An algorithm is derived to calculate the piecewise linear solution path of the corresponding minimization problem. A Matlab implementation of the proposed algorithm is provided, and some applications of the method are also studied.
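The two ingredients of this formulation can be illustrated numerically. The sketch below (our illustration, with hypothetical function names; it is not the solution-path algorithm of the paper) evaluates the quantile regression check loss and a mixed $\ell_{1,\infty}$ group norm, with groups taken as matrix rows for simplicity:

```python
import numpy as np

def pinball_loss(residual, tau):
    # Quantile regression check (pinball) loss:
    # tau * r for r >= 0 and (tau - 1) * r for r < 0.
    return np.maximum(tau * residual, (tau - 1) * residual)

def mixed_l1_inf(W):
    # Mixed l_{1,inf} norm of grouped coefficients (assumption:
    # one group per row): sum over groups of the largest |entry|.
    return np.abs(W).max(axis=1).sum()
```

Penalizing `mixed_l1_inf` tends to zero out entire groups at once, which is the structured-sparsity effect described above.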
A qualitative comparison of total-variation-like penalties (total variation, the Huber variant of total variation, total generalized variation, ...) is made in the context of global seismic tomography. Both penalized and constrained formulations of seismic recovery problems are treated. A number of simple iterative recovery algorithms applicable to these problems are described, and their convergence speed is compared numerically in this setting. For the constrained formulation a new algorithm is proposed and its convergence is proven.
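For concreteness, here is a minimal one-dimensional sketch (our illustration, not the paper's implementation) of the discrete total variation penalty and its Huber variant, which replaces the absolute value by a quadratic near zero to avoid staircasing:

```python
import numpy as np

def tv_1d(x):
    # Discrete total variation: sum of absolute finite differences.
    return np.abs(np.diff(x)).sum()

def huber_tv_1d(x, delta):
    # Huber variant of TV: quadratic for small differences
    # (|d| <= delta), linear with matched slope in the tails.
    d = np.diff(x)
    return np.where(np.abs(d) <= delta,
                    0.5 * d**2 / delta,
                    np.abs(d) - 0.5 * delta).sum()
```

Total generalized variation is not sketched here; it requires an auxiliary variable and is substantially more involved.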
We propose an iterative algorithm for the minimization of an $\ell_1$-norm penalized least squares functional under additional linear constraints. The algorithm is fully explicit: it uses only matrix multiplications with the three matrices present in the problem (in the linear constraint, in the data misfit part, and in the penalty term of the functional). None of the three matrices needs to be invertible. Convergence is proven in a finite-dimensional setting. We apply the algorithm to a synthetic problem in magneto-encephalography, where it is used for the reconstruction of divergence-free current densities subject to a sparsity-promoting penalty on the wavelet coefficients of the current densities. We discuss the effects of imposing zero divergence and of imposing joint sparsity (of the vector components of the current density) on the reconstruction.
An explicit algorithm for the minimization of an $\ell_1$-penalized least squares functional with a non-separable $\ell_1$ term is proposed. Each step of the iterative algorithm requires four matrix-vector multiplications and a single simple projection onto a convex set (or, equivalently, a thresholding operation). Convergence is proven and an $O(1/N)$ convergence rate is derived for the functional. In the special case where the matrix in the $\ell_1$ term is the identity (or orthogonal), the algorithm reduces to the traditional iterative soft-thresholding algorithm. In the special case where the matrix in the quadratic term is the identity (or orthogonal), the algorithm reduces to a gradient projection algorithm for the dual problem. By replacing the projection with a simple proximity operator, convex non-separable penalties other than those based on an $\ell_1$-norm can be handled as well.
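In the separable special case mentioned above (identity matrix in the $\ell_1$ term), the iteration is classical iterative soft-thresholding. A minimal NumPy sketch of that special case (our illustration; not the non-separable algorithm of the paper):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximity operator of t * ||.||_1: componentwise shrinkage.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Iterative soft-thresholding for
    #   min_x 0.5 * ||A x - b||^2 + lam * ||x||_1.
    x = np.zeros(A.shape[1])
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz const. of gradient
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - b), step * lam)
    return x
```

Each iteration costs two matrix-vector products plus a thresholding, matching the structure described in the abstract for the orthogonal case.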
124 - I. Loris, H. Douma, G. Nolet (2010)
The effects of several nonlinear regularization techniques are discussed in the framework of 3D seismic tomography. Traditional linear $\ell_2$ penalties are compared to so-called sparsity-promoting $\ell_1$ and $\ell_0$ penalties, and to a total variation penalty. Which of these algorithms is judged optimal depends on the specific requirements of the scientific experiment. If the correct reproduction of model amplitudes is important, classical damping towards a smooth model using an $\ell_2$ norm works almost as well as minimizing the total variation, but is much more efficient. If gradients (edges of anomalies) should be resolved with a minimum of distortion, we prefer $\ell_1$ damping of Daubechies-4 wavelet coefficients. It has the additional advantage of yielding a noiseless reconstruction, contrary to simple $\ell_2$ minimization ('Tikhonov regularization'), which should be avoided. In some of our examples, the $\ell_0$ method produced notable artifacts. In addition, we show how nonlinear $\ell_1$ methods for finding sparse models can be competitive in speed with the widely used $\ell_2$ methods, certainly under noisy conditions, so that there is no need to shun $\ell_1$ penalizations.
153 - I. Loris, M. Bertero, C. De Mol (2009)
We propose a new gradient projection algorithm that compares favorably with the fastest algorithms available to date for $\ell_1$-constrained sparse recovery from noisy data, both in the compressed sensing and inverse problem frameworks. The method exploits a line search along the feasible direction and an adaptive steplength selection based on recent strategies for alternating the well-known Barzilai-Borwein rules. The convergence of the proposed approach is discussed, and a computational study on both well-conditioned and ill-conditioned problems is carried out for performance evaluation in comparison with five other algorithms proposed in the literature.
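The Barzilai-Borwein rules referred to above choose the steplength from the most recent iterate and gradient differences. A minimal sketch on a smooth least squares problem, alternating the two rules (our simplification: no line search and no $\ell_1$ constraint, so this is not the paper's algorithm):

```python
import numpy as np

def bb_gradient(A, b, n_iter=200):
    # Gradient descent for min_x 0.5 * ||A x - b||^2 with
    # alternating Barzilai-Borwein steplengths BB1 and BB2.
    x = np.zeros(A.shape[1])
    g = A.T @ (A @ x - b)
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # safe initial steplength
    for k in range(n_iter):
        x_new = x - step * g
        g_new = A.T @ (A @ x_new - b)
        s, y = x_new - x, g_new - g
        if s @ y > 0:
            # BB1 = <s,s>/<s,y> on even k, BB2 = <s,y>/<y,y> on odd k.
            step = (s @ s) / (s @ y) if k % 2 == 0 else (s @ y) / (y @ y)
        x, g = x_new, g_new
    return x
```

The two rules are secant-type approximations of the inverse Hessian scale; alternating them is one of the strategies the abstract alludes to.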
179 - Ignace Loris (2009)
The problem of assessing the performance of algorithms used for the minimization of an $\ell_1$-penalized least squares functional, for a range of penalty parameters, is investigated. A criterion that uses the idea of 'approximation isochrones' is introduced. Five different iterative minimization algorithms are tested and compared, as well as two warm-start strategies. Both well-conditioned and ill-conditioned problems are used in the comparison, and the contrast between these two categories is highlighted.
Regularization of ill-posed linear inverse problems via $\ell_1$ penalization has been proposed for cases where the solution is known to be (almost) sparse. One way to obtain the minimizer of such an $\ell_1$-penalized functional is via an iterative soft-thresholding algorithm. We propose an alternative implementation of $\ell_1$ constraints, using a gradient method with projection onto $\ell_1$-balls. The corresponding algorithm again uses iterative soft-thresholding, now with a variable thresholding parameter. We also propose accelerated versions of this iterative method.
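The projection onto an $\ell_1$-ball is indeed a soft-thresholding with a data-dependent threshold. A minimal sketch of this projection using the standard sorting-based construction (our illustration, not the paper's full projected-gradient algorithm):

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    # Euclidean projection of v onto {x : ||x||_1 <= radius},
    # realized as soft-thresholding with a threshold theta chosen
    # so that the result has l1-norm exactly equal to radius.
    if np.abs(v).sum() <= radius:
        return v.copy()  # already feasible: projection is the identity
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    css = np.cumsum(u)
    # Largest index rho with u[rho] * (rho + 1) > css[rho] - radius.
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > css - radius)[0][-1]
    theta = (css[rho] - radius) / (rho + 1)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)
```

The threshold `theta` plays the role of the variable thresholding parameter mentioned in the abstract: it changes with the point being projected.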
77 - Ignace Loris (2008)
L1Packv2 is a Mathematica package that contains a number of algorithms for the minimization of an $\ell_1$-penalized least squares functional. The algorithms can handle a mix of penalized and unpenalized variables. Several instructive examples are given, and an implementation that yields exact output whenever exact data are given is provided.