Quantile regression is studied in combination with a penalty that promotes structured (or group) sparsity. A mixed $\ell_{1,\infty}$-norm on the parameter vector is used to impose structured sparsity on the traditional quantile regression problem. An algorithm is derived to compute the piecewise linear solution path of the corresponding minimization problem. A Matlab implementation of the proposed algorithm is provided, and some applications of the method are also studied.
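A minimal sketch (in Python, not the paper's Matlab solution-path algorithm) of the objective described above: the quantile-regression "pinball" loss plus a mixed $\ell_{1,\infty}$ group penalty. The names `X`, `y`, `tau`, `lam`, and the list of index groups are illustrative assumptions.

```python
import numpy as np

def pinball_loss(residual, tau):
    # Quantile regression loss: tau*r for r >= 0, (tau-1)*r for r < 0.
    return np.sum(np.maximum(tau * residual, (tau - 1.0) * residual))

def group_l1_inf(beta, groups):
    # Mixed l_{1,inf} norm: sum over groups of the largest absolute coefficient,
    # which encourages entire groups of coefficients to vanish together.
    return sum(np.max(np.abs(beta[g])) for g in groups)

def objective(beta, X, y, tau, lam, groups):
    return pinball_loss(y - X @ beta, tau) + lam * group_l1_inf(beta, groups)
```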
A qualitative comparison of total-variation-like penalties (total variation, the Huber variant of total variation, total generalized variation, ...) is made in the context of global seismic tomography. Both penalized and constrained formulations of seismic recovery problems are treated. A number of simple iterative recovery algorithms applicable to these problems are described, and their convergence speed is compared numerically in this setting. For the constrained formulation, a new algorithm is proposed and its convergence is proven.
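A minimal sketch comparing two of the penalties named above on a 1-D signal: plain (anisotropic) total variation and its Huber variant. The smoothing parameter `eps` and the test signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def tv(u):
    # Total variation: l1 norm of the finite-difference gradient.
    return np.sum(np.abs(np.diff(u)))

def huber_tv(u, eps):
    # Huber variant: quadratic near zero, linear in the tails, so the
    # penalty is differentiable while still promoting piecewise constancy.
    d = np.abs(np.diff(u))
    return np.sum(np.where(d <= eps, d**2 / (2 * eps), d - eps / 2))

u = np.concatenate([np.zeros(50), np.ones(50)])  # a step signal
print(tv(u), huber_tv(u, eps=0.1))               # both penalize the single jump
```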
We propose an iterative algorithm for the minimization of an $\ell_1$-norm penalized least-squares functional under additional linear constraints. The algorithm is fully explicit: it uses only matrix multiplications with the three matrices present in the problem (in the linear constraint, in the data misfit part, and in the penalty term of the functional). None of the three matrices needs to be invertible. Convergence is proven in a finite-dimensional setting. We apply the algorithm to a synthetic problem in magnetoencephalography, where it is used for the reconstruction of divergence-free current densities subject to a sparsity-promoting penalty on the wavelet coefficients of the current densities. We discuss the effects of imposing zero divergence and of imposing joint sparsity (of the vector components of the current density) on the reconstruction.
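A minimal sketch of a fully explicit iteration of this general kind (a generic Condat-Vũ-type primal-dual scheme, not necessarily the paper's algorithm) for $\min_x \tfrac12\|Kx-y\|^2 + \lambda\|Wx\|_1$ subject to $Bx=0$: only multiplications by `K`, `W`, `B` and their transposes appear, and no matrix is inverted. The step sizes `tau`, `sigma` are assumed small enough for convergence.

```python
import numpy as np

def primal_dual(K, W, B, y, lam, tau, sigma, n_iter=500):
    x = np.zeros(K.shape[1])
    p = np.zeros(W.shape[0])   # dual variable for the l1 penalty term
    q = np.zeros(B.shape[0])   # Lagrange multiplier for the constraint Bx = 0
    for _ in range(n_iter):
        # Explicit gradient/dual step: only matrix-vector products are used.
        x_new = x - tau * (K.T @ (K @ x - y) + W.T @ p + B.T @ q)
        x_bar = 2 * x_new - x                             # extrapolation
        p = np.clip(p + sigma * (W @ x_bar), -lam, lam)   # project onto l_inf ball
        q = q + sigma * (B @ x_bar)                       # unconstrained dual ascent
        x = x_new
    return x
```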
An explicit algorithm for the minimization of an $\ell_1$-penalized least-squares functional, with non-separable $\ell_1$ term, is proposed. Each step of the iterative algorithm requires four matrix-vector multiplications and a single simple projection on a convex set (or, equivalently, thresholding). Convergence is proven and a $1/N$ convergence rate is derived for the functional. In the special case where the matrix in the $\ell_1$ term is the identity (or orthogonal), the algorithm reduces to the traditional iterative soft-thresholding algorithm. In the special case where the matrix in the quadratic term is the identity (or orthogonal), the algorithm reduces to a gradient projection algorithm for the dual problem. By replacing the projection with a simple proximity operator, convex non-separable penalties other than those based on an $\ell_1$-norm can be handled as well.
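The stated equivalence between the projection step and thresholding can be checked directly: by Moreau's decomposition, the proximity operator of $\lambda\|\cdot\|_1$ equals the identity minus the projection onto the $\ell_\infty$ ball of radius $\lambda$. A minimal sketch (names are illustrative):

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximity operator of lam * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def via_projection(z, lam):
    # Identity minus projection onto the l_inf ball of radius lam.
    return z - np.clip(z, -lam, lam)

z = np.array([-2.0, -0.5, 0.0, 0.3, 1.5])
assert np.allclose(soft_threshold(z, 1.0), via_projection(z, 1.0))
```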
Ignace Loris, 2009
The problem of assessing the performance of algorithms used for the minimization of an $\ell_1$-penalized least-squares functional, for a range of penalty parameters, is investigated. A criterion that uses the idea of 'approximation isochrones' is introduced. Five different iterative minimization algorithms are tested and compared, as well as two warm-start strategies. Both well-conditioned and ill-conditioned problems are used in the comparison, and the contrast between these two categories is highlighted.
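A minimal sketch of a warm-start strategy of the kind compared above: sweep the penalty parameter from large to small, initializing each solve with the previous minimizer. Here `solve` stands for any iterative minimizer of the $\ell_1$-penalized least-squares functional accepting an initial point; its interface is an assumption.

```python
import numpy as np

def warm_start_path(solve, K, y, lambdas):
    x = np.zeros(K.shape[1])
    path = []
    for lam in sorted(lambdas, reverse=True):  # strongest penalty first
        x = solve(K, y, lam, x0=x)             # reuse the previous solution
        path.append((lam, x.copy()))
    return path
```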
Ignace Loris, 2008
L1Packv2 is a Mathematica package that contains a number of algorithms for the minimization of an $\ell_1$-penalized least-squares functional. The algorithms can handle a mix of penalized and unpenalized variables. Several instructive examples are given. Also, an implementation that yields exact output whenever exact data are given is provided.
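A minimal sketch (in Python, not the Mathematica package itself) of how a mix of penalized and unpenalized variables can be handled: an ISTA-type step that soft-thresholds only the coefficients marked as penalized. The matrix `K`, data `y`, step size `t`, and boolean mask `penalized` are illustrative assumptions.

```python
import numpy as np

def ista_step(x, K, y, lam, t, penalized):
    # Gradient step on the least-squares data misfit.
    z = x - t * (K.T @ (K @ x - y))
    # Soft-threshold only the penalized coefficients; the rest take
    # a plain gradient step.
    thresholded = np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0)
    return np.where(penalized, thresholded, z)
```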