The Approximate Message Passing (AMP) algorithm efficiently reconstructs signals that have been sampled with large i.i.d. sub-Gaussian sensing matrices. Central to AMP is its state evolution, which guarantees that at every iteration the difference between the current estimate and the ground truth (the aliasing) obeys a Gaussian distribution that can be fully characterized by a scalar. However, when Fourier coefficients of a signal with non-uniform spectral density are sampled, such as in Magnetic Resonance Imaging (MRI), the aliasing is intrinsically colored: AMP's scalar state evolution is no longer accurate and the algorithm encounters convergence problems. In response, we propose the Variable Density Approximate Message Passing (VDAMP) algorithm, which uses the wavelet domain to model the colored aliasing. We present empirical evidence that VDAMP obeys a colored state evolution, where the aliasing obeys a Gaussian distribution that can be fully characterized by one scalar per wavelet subband. A benefit of state evolution is that Stein's Unbiased Risk Estimate (SURE) can be effectively implemented, yielding an algorithm with subband-dependent thresholding that has no free parameters. We empirically evaluate VDAMP against three variants of the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA) and find that it converges in around 10 times fewer iterations on average than the next-fastest method, and to a comparable mean-squared error.
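To illustrate how SURE can choose a subband threshold with no free parameters, the following Python sketch applies the classic SureShrink criterion for real-valued Gaussian noise to a single wavelet subband. The helper name `sure_soft_threshold`, the use of NumPy, and the real-valued (rather than complex-valued) risk formula are assumptions made for illustration; this is not the paper's exact estimator.

```python
import numpy as np

def sure_soft_threshold(y, sigma):
    """Select a soft threshold for one wavelet subband by minimizing
    Stein's Unbiased Risk Estimate (classic SureShrink form, assuming
    real coefficients corrupted by N(0, sigma^2) noise).

    y     : 1-D array of wavelet coefficients in the subband
    sigma : noise standard deviation for this subband (one scalar per
            subband, as in a colored state evolution)
    """
    n = y.size
    abs_y = np.abs(y)
    # Candidate thresholds: the sorted coefficient magnitudes.
    candidates = np.sort(abs_y)
    cum_sq = np.cumsum(candidates ** 2)
    k = np.arange(1, n + 1)  # number of coefficients with |y_i| <= t
    # SURE(t) = n*sigma^2 + sum_i min(|y_i|, t)^2 - 2*sigma^2 * #{|y_i| <= t},
    # evaluated for all candidate thresholds at once.
    risk = (n * sigma ** 2
            + cum_sq + (n - k) * candidates ** 2
            - 2 * sigma ** 2 * k)
    t = candidates[np.argmin(risk)]
    # Soft-threshold the subband with the selected threshold.
    return np.sign(y) * np.maximum(abs_y - t, 0.0), t
```

In a VDAMP-style reconstruction this selection would be repeated independently for each subband, with `sigma` taken from the corresponding entry of the colored state evolution rather than a single global noise level.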