
An iterative algorithm for sparse and constrained recovery with applications to divergence-free current reconstructions in magneto-encephalography

Posted by Ignace Loris
Publication date: 2012
Paper language: English





We propose an iterative algorithm for the minimization of an $\ell_1$-norm penalized least squares functional under additional linear constraints. The algorithm is fully explicit: it uses only matrix multiplications with the three matrices present in the problem (in the linear constraint, in the data misfit part, and in the penalty term of the functional). None of the three matrices needs to be invertible. Convergence is proven in a finite-dimensional setting. We apply the algorithm to a synthetic problem in magneto-encephalography, where it is used for the reconstruction of divergence-free current densities subject to a sparsity-promoting penalty on the wavelet coefficients of the current densities. We discuss the effects of imposing zero divergence and of imposing joint sparsity (of the vector components of the current density) on the current density reconstruction.
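
To make the structure of such a scheme concrete, the following is a minimal NumPy sketch of a primal-dual iteration of this flavor for $\min_x \frac{1}{2}\|Kx-y\|^2 + \lambda\|Ax\|_1$ subject to $Bx = 0$. The update rules, step-size conditions, and all names here are illustrative assumptions rather than a transcription of the paper's algorithm; the sketch only mirrors the property stated in the abstract, namely that every step uses nothing beyond multiplications by the three matrices and their transposes.

```python
import numpy as np

def soft_project(w, lam):
    """Projection onto the l-infinity ball of radius lam (dual of the l1 norm)."""
    return np.clip(w, -lam, lam)

def constrained_sparse_recovery(K, A, B, y, lam, tau, sigma, n_iter=1000):
    """Illustrative primal-dual loop for
        min_x 0.5*||K x - y||^2 + lam*||A x||_1   subject to   B x = 0.
    Only products with K, A, B and their transposes are used; no matrix is
    inverted.  tau and sigma are assumed small enough for convergence
    (e.g. tau*||K||^2 < 2 and tau*sigma*(||A||^2 + ||B||^2) <= 1)."""
    x = np.zeros(K.shape[1])
    w = np.zeros(A.shape[0])   # dual variable for the l1 penalty term
    v = np.zeros(B.shape[0])   # Lagrange multiplier for the constraint B x = 0
    for _ in range(n_iter):
        grad = K.T @ (K @ x - y)                         # gradient of the data misfit
        x_bar = x - tau * (grad + A.T @ w + B.T @ v)     # tentative primal update
        w = soft_project(w + sigma * (A @ x_bar), lam)   # dual step + projection
        v = v + sigma * (B @ x_bar)                      # no projection: constraint set is {0}
        x = x - tau * (grad + A.T @ w + B.T @ v)         # corrected primal update
    return x
```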




Read also

An explicit algorithm for the minimization of an $\ell_1$-penalized least squares functional with a non-separable $\ell_1$ term is proposed. Each step of the iterative algorithm requires four matrix-vector multiplications and a single simple projection onto a convex set (equivalently, a thresholding). Convergence is proven and a $1/N$ convergence rate is derived for the functional. In the special case where the matrix in the $\ell_1$ term is the identity (or orthogonal), the algorithm reduces to the traditional iterative soft-thresholding algorithm. In the special case where the matrix in the quadratic term is the identity (or orthogonal), the algorithm reduces to a gradient projection algorithm for the dual problem. By replacing the projection with a simple proximity operator, convex non-separable penalties other than those based on an $\ell_1$-norm can be handled as well.
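
A hedged sketch of an iteration with exactly this operation count, written for $\min_x \frac{1}{2}\|Kx - y\|^2 + \lambda\|Ax\|_1$ in assumed notation: caching $A^T w$ between iterations keeps the cost at four matrix-vector products plus one projection per step.

```python
import numpy as np

def nonseparable_l1_ls(K, A, y, lam, tau, sigma, n_iter=1000):
    """Sketch only: four matrix-vector products (with K, K^T, A, A^T) and one
    projection onto {w : |w_i| <= lam} per iteration."""
    x = np.zeros(K.shape[1])
    w = np.zeros(A.shape[0])
    Atw = A.T @ w                                         # cached across iterations
    for _ in range(n_iter):
        grad = K.T @ (K @ x - y)                          # products 1 and 2
        x_bar = x - tau * (grad + Atw)
        w = np.clip(w + sigma * (A @ x_bar), -lam, lam)   # product 3, then projection
        Atw = A.T @ w                                     # product 4, reused next pass
        x = x - tau * (grad + Atw)
    return x
```

With $A$ equal to the identity, the dual projection and the primal update combine into a soft-thresholding of the gradient step, which is the ISTA special case mentioned in the abstract.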
We recover jump-sparse and sparse signals from blurred incomplete data corrupted by (possibly non-Gaussian) noise using inverse Potts energy functionals. We obtain analytical results (existence of minimizers, complexity) on inverse Potts functionals and provide relations to sparsity problems. We then propose a new optimization method for these functionals which is based on dynamic programming and the alternating direction method of multipliers (ADMM). A series of experiments shows that the proposed method yields very satisfactory jump-sparse and sparse reconstructions, respectively. We highlight the capability of the method by comparing it with classical and recent approaches such as TV minimization (jump-sparse signals), orthogonal matching pursuit, iterative hard thresholding, and iteratively reweighted $\ell^1$ minimization (sparse signals).
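
For intuition, the 1-D Potts problem with a direct (unblurred) data term can be solved exactly by a classical $O(n^2)$ dynamic program over segment boundaries; the sketch below shows only this building block, not the paper's ADMM scheme for blurred, incomplete data.

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact minimizer of gamma*(#jumps in x) + ||x - y||_2^2 for 1-D data y,
    via the classical O(n^2) dynamic program over segment boundaries."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))       # prefix sums of y
    s2 = np.concatenate(([0.0], np.cumsum(y * y)))   # prefix sums of y**2
    def seg_err(l, r):
        """Squared error of the best constant fit on y[l..r] (inclusive)."""
        s = s1[r + 1] - s1[l]
        return (s2[r + 1] - s2[l]) - s * s / (r - l + 1)
    best = np.full(n + 1, np.inf)
    best[0] = -gamma                 # so the first segment incurs no jump penalty
    start = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):        # r = length of the prefix being solved
        for l in range(1, r + 1):    # l = first index of the last segment (1-based)
            val = best[l - 1] + gamma + seg_err(l - 1, r - 1)
            if val < best[r]:
                best[r], start[r] = val, l - 1
    x, r = np.empty(n), n            # backtrack the optimal segmentation
    while r > 0:
        l = start[r]
        x[l:r] = y[l:r].mean()
        r = l
    return x
```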
Philipp Trunschke, 2021
We consider the problem of approximating a function in general nonlinear subsets of $L^2$ when only a weighted Monte Carlo estimate of the $L^2$-norm can be computed. Of particular interest in this setting is the concept of sample complexity, the number of samples that are necessary to recover the best approximation. Bounds for this quantity have been derived in a previous work; they depend primarily on the model class and are not influenced positively by the regularity of the sought function. This result, however, is only a worst-case bound and cannot explain the remarkable performance of iterative hard thresholding algorithms that is observed in practice. We reexamine the results of the previous paper and derive a new bound that is able to utilize the regularity of the sought function. A critical analysis of our results allows us to derive a sample-efficient algorithm for the model set of low-rank tensors. The viability of this algorithm is demonstrated by recovering quantities of interest for a classical high-dimensional random partial differential equation.
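
For reference, a generic iterative hard thresholding loop for the sparse case looks as follows; this is an assumed textbook variant, and on the paper's model set of low-rank tensors the thresholding step would instead be a rank truncation (e.g. a truncated SVD).

```python
import numpy as np

def iht(A, y, k, n_iter=300, step=None):
    """Generic iterative hard thresholding for min ||A x - y||^2 over k-sparse x.
    The projection onto the model set (here: keep the k largest entries) is the
    only step that changes when the model set changes, e.g. to low-rank tensors."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size for the gradient part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * (A.T @ (y - A @ x))       # gradient step on the misfit
        x[np.argsort(np.abs(x))[:-k]] = 0.0      # zero all but the k largest entries
    return x
```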
We study sparse recovery with structured random measurement matrices having independent, identically distributed, and uniformly bounded rows and a nontrivial covariance structure. This class of matrices arises from random sampling of bounded Riesz systems and generalizes random partial Fourier matrices. Our main result improves the currently available results for the null space and restricted isometry properties of such random matrices. The main novelty of our analysis is a new upper bound for the expectation of the supremum of a Bernoulli process associated with a restricted isometry constant. We apply our result to prove new performance guarantees for the CORSING method, a recently introduced numerical approximation technique for partial differential equations (PDEs) based on compressive sensing.
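
A minimal example of the simplest instance named above, a random partial Fourier matrix: rows are drawn i.i.d. uniformly from the rows of the unitary DFT matrix and rescaled so that the row covariance is the identity (dimensions and seed are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 256, 64                                   # ambient dimension, sample count
F = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
rows = rng.integers(0, N, size=m)                # i.i.d. uniform row sampling
A = np.sqrt(N / m) * F[rows]                     # random partial Fourier matrix
# Each row is uniformly bounded (all entries have modulus 1/sqrt(m)) and
# E[A^* A] = I, matching the i.i.d. bounded-row model in the abstract.
```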
We present an iterative support shrinking algorithm for $\ell_p$-$\ell_q$ minimization ($0 < p < 1 \leq q < \infty$). This algorithm guarantees the nonexpansiveness of the signal support set and can be easily implemented after being proximally linearized. The subproblem can be solved very efficiently owing to its convexity and its shrinking size along the iterations. We prove that the iterates of the algorithm globally converge to a stationary point of the $\ell_p$-$\ell_q$ objective function. In addition, we establish a lower bound theory for the iteration sequence, which is more practical than the lower bound results for local minimizers in the literature.
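
The following is a heavily hedged sketch of the support-shrinking mechanism for the special case $q = 2$, realized as a proximally linearized, reweighted-$\ell_1$ outer loop; the inner solver, the weight formula, and the iteration counts are illustrative assumptions, not the paper's algorithm. Once a component reaches zero its weight becomes infinite, so the support can only shrink.

```python
import numpy as np

def weighted_ista(A, y, w, n_iter=200, step=None):
    """Proximal gradient method for min_x 0.5*||A x - y||^2 + sum_i w_i*|x_i|.
    Components with w_i = inf are forced to zero, shrinking the support."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - y))
        x = np.sign(z) * np.maximum(np.abs(z) - step * w, 0.0)
    return x

def lp_support_shrinking(A, y, lam, p=0.5, outer=15, eps=1e-10):
    """Sketch of a reweighted-l1 outer loop for the nonconvex objective
    lam*||x||_p^p + 0.5*||A x - y||^2 (the q = 2 case).  Linearizing |x_i|^p
    at the current iterate gives the weights lam*p*|x_i|^(p-1); entries at
    zero receive infinite weight and can never re-enter the support."""
    x = weighted_ista(A, y, np.full(A.shape[1], lam))   # convex l1 warm start
    for _ in range(outer):
        w = np.where(np.abs(x) > eps,
                     lam * p * np.maximum(np.abs(x), eps) ** (p - 1),
                     np.inf)
        x = weighted_ista(A, y, w)
    return x
```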