
State Evolution for Approximate Message Passing with Non-Separable Functions

Added by Andrea Montanari
Publication date: 2017
Language: English





Given a high-dimensional data matrix ${\boldsymbol A}\in{\mathbb R}^{m\times n}$, Approximate Message Passing (AMP) algorithms construct sequences of vectors ${\boldsymbol u}^t\in{\mathbb R}^n$, ${\boldsymbol v}^t\in{\mathbb R}^m$, indexed by $t\in\{0,1,2,\dots\}$, by iteratively applying ${\boldsymbol A}$ or ${\boldsymbol A}^{{\sf T}}$ together with suitable non-linear functions that depend on the specific application. Special instances of this approach have been developed, among other applications, for compressed sensing reconstruction, robust regression, Bayesian estimation, low-rank matrix recovery, phase retrieval, and community detection in graphs. For certain classes of random matrices ${\boldsymbol A}$, AMP admits an asymptotically exact description in the high-dimensional limit $m,n\to\infty$, which goes under the name of "state evolution". Earlier work established state evolution for separable non-linearities (under certain regularity conditions). Nevertheless, empirical work demonstrated several important applications that require non-separable functions. In this paper we generalize state evolution to Lipschitz continuous non-separable non-linearities, for Gaussian matrices ${\boldsymbol A}$. Our proof makes use of Bolthausen's conditioning technique along with several approximation arguments. In particular, we introduce a modified algorithm (called LAMP, for Long AMP) which is of independent interest.
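
To make the iteration concrete, here is a minimal numpy sketch of the generic AMP template described above: alternating applications of ${\boldsymbol A}$ and ${\boldsymbol A}^{{\sf T}}$ with a (possibly non-separable) Lipschitz denoiser, plus the Onsager correction term. The Monte Carlo divergence estimator and all parameter choices are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def amp(A, y, denoise, iters=30, seed=0):
    """Generic AMP iteration for y = A x + noise, where denoise(r, t)
    is a (possibly non-separable) Lipschitz denoiser eta_t.

    The Onsager term needs (1/m) * div eta_t(r); since a non-separable
    denoiser rarely exposes its divergence in closed form, it is
    estimated here with a Monte Carlo probe (an illustrative choice)."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    x, z = np.zeros(n), y.copy()
    for t in range(iters):
        r = x + A.T @ z                       # effective observation
        x_new = denoise(r, t)                 # non-linear denoising step
        p = rng.standard_normal(n)            # random probe direction
        eps = 1e-3 * np.linalg.norm(r) / np.sqrt(n) + 1e-12
        onsager = p @ (denoise(r + eps * p, t) - x_new) / (eps * m)
        z = y - A @ x_new + onsager * z       # Onsager-corrected residual
        x = x_new
    return x
```

For a separable soft-thresholding denoiser, e.g. `denoise = lambda r, t: np.sign(r) * np.maximum(np.abs(r) - 1.0, 0.0)`, this reduces to the familiar AMP iteration for the LASSO; a non-separable choice (say, an image denoiser acting on all of $r$ at once) plugs into the same interface, which is the regime covered by the paper's state-evolution result.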



Related research

In sketched clustering, a dataset of $T$ samples is first sketched down to a vector of modest size, from which the centroids are subsequently extracted. Advantages include (i) reduced storage complexity and (ii) centroid extraction complexity independent of $T$. For the sketching methodology recently proposed by Keriven et al., which can be interpreted as a random sampling of the empirical characteristic function, we propose a sketched clustering algorithm based on approximate message passing. Numerical experiments suggest that our approach is more efficient than the state-of-the-art sketched clustering algorithm CL-OMPR (in both computational and sample complexity) and more efficient than k-means++ when $T$ is large.
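
As a concrete illustration of the sketching step described above, the snippet below computes a random sampling of the empirical characteristic function of the dataset. The Gaussian choice of frequency distribution and its scale are illustrative assumptions; the methodology of Keriven et al. tunes this distribution to the data.

```python
import numpy as np

def sketch_dataset(X, num_freqs=100, scale=1.0, seed=0):
    """Sketch a dataset X of shape (T, d) down to `num_freqs` complex
    numbers by sampling its empirical characteristic function at random
    frequencies: z_j = (1/T) * sum_t exp(i <w_j, x_t>)."""
    rng = np.random.default_rng(seed)
    W = scale * rng.standard_normal((num_freqs, X.shape[1]))  # random frequencies
    return np.exp(1j * (X @ W.T)).mean(axis=0)  # sketch; size independent of T
```

The storage cost is `num_freqs` complex numbers regardless of $T$, and centroid extraction then operates on this sketch alone, which is what makes the extraction complexity independent of $T$.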
Qiuyun Zou, Haochuan Zhang, 2020
In this paper, we extend the bilinear generalized approximate message passing (BiG-AMP) approach, originally proposed for high-dimensional generalized bilinear regression, to the multi-layer case, to handle cascaded problems such as the matrix-factorization problems arising in relay communication, among others. Assuming statistically independent matrix entries with known priors, the new algorithm, called ML-BiGAMP, approximates the general sum-product loopy belief propagation (LBP) in the high-dimensional limit while enjoying a substantial reduction in computational complexity. We demonstrate that, in the large-system limit, the asymptotic MSE performance of ML-BiGAMP can be fully characterized via a set of simple one-dimensional equations termed state evolution (SE). We establish that the asymptotic MSE predicted by the ML-BiGAMP SE matches perfectly the exact MMSE predicted by the replica method, which is well known to be Bayes-optimal but infeasible in practice. This consistency indicates that ML-BiGAMP may retain the same Bayes-optimal performance as the MMSE estimator in high-dimensional applications, although ML-BiGAMP's computational burden is far lower. As an illustrative example of the general ML-BiGAMP, we provide a detector design that jointly estimates the channel fading and the data symbols with high precision for two-hop amplify-and-forward relay communication systems.
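
To give a flavor of what such one-dimensional state-evolution equations look like, the sketch below iterates the scalar SE recursion for the simpler single-layer AMP setting, $\tau_{t+1}^2 = \sigma^2 + \delta^{-1}\,\mathbb{E}[(\eta_t(X + \tau_t Z) - X)^2]$ with $Z \sim N(0,1)$ and $\delta = m/n$. This is a simplified stand-in for intuition, not the multi-layer ML-BiGAMP SE itself.

```python
import numpy as np

def state_evolution(denoise, sigma2, delta, sample_prior, iters=20, mc=200_000, seed=0):
    """Iterate the scalar SE recursion
        tau_{t+1}^2 = sigma^2 + (1/delta) * E[(eta_t(X + tau_t*Z) - X)^2]
    with X ~ prior and Z ~ N(0,1). Expectations are Monte Carlo
    estimates; the sample sizes are arbitrary illustrative choices."""
    rng = np.random.default_rng(seed)
    x = sample_prior(rng, mc)
    z = rng.standard_normal(mc)
    tau2 = sigma2 + np.mean(x**2) / delta        # x^0 = 0 initialization
    trajectory = [tau2]
    for t in range(iters):
        mse = np.mean((denoise(x + np.sqrt(tau2) * z, t) - x) ** 2)
        tau2 = sigma2 + mse / delta              # predicted effective noise
        trajectory.append(tau2)
    return trajectory
```

The fixed point of this recursion predicts the asymptotic MSE of the corresponding AMP iterates; the ML-BiGAMP SE described above plays the analogous role for the multi-layer bilinear setting.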
This paper considers the massive connectivity problem in an asynchronous grant-free random access system, where a huge number of devices sporadically transmit data to a base station (BS) with imperfect synchronization. The goal is to design algorithms for joint user activity detection, delay detection, and channel estimation. By exploiting the sparsity in both user activity and delays, we formulate a hierarchical sparse signal recovery problem in both the single-antenna and the multiple-antenna scenarios. While traditional compressed sensing algorithms can be applied to these problems, they suffer from high computational complexity and often require perfect statistical information about the channel and devices. This paper solves these problems by designing the Learned Approximate Message Passing (LAMP) network, which belongs to the class of model-driven deep learning approaches and ensures efficient performance without tremendous training data. In particular, in the multiple-antenna scenario, we design three different LAMP structures, namely distributed, centralized, and hybrid, to balance performance and complexity. Simulation results demonstrate that the proposed LAMP networks significantly outperform the conventional AMP method thanks to their ability to learn parameters. It is also shown that LAMP is robust to the maximal delay spread of the asynchronous users.
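
As background for how such a model-driven network can be built, the PyTorch sketch below unfolds one AMP iteration into a layer with learnable parameters (a learnable adjoint matrix and threshold). This parameterization follows the common "learned AMP" unfolding recipe and is an illustrative assumption, not necessarily the exact architecture of this paper.

```python
import torch
import torch.nn as nn

class LAMPLayer(nn.Module):
    """One unfolded AMP iteration with learnable parameters."""
    def __init__(self, A):
        super().__init__()
        self.register_buffer("A", A.clone())          # fixed measurement matrix
        self.B = nn.Parameter(A.t().clone())          # learnable "adjoint"
        self.theta = nn.Parameter(torch.tensor(1.0))  # learnable threshold

    def forward(self, x, z, y):
        r = x + self.B @ z                                        # pseudo-data
        x_new = torch.sign(r) * torch.relu(r.abs() - self.theta)  # soft threshold
        b = (x_new != 0).float().mean() * x.numel() / y.numel()   # Onsager coeff.
        z_new = y - self.A @ x_new + b * z                        # corrected residual
        return x_new, z_new
```

Stacking several such layers (initialized with x = 0, z = y) and training the `B` matrices and thresholds end to end on synthetic input-output pairs yields the unfolded network; this per-layer parameter learning is what lets LAMP outperform fixed-parameter AMP.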
Reconstruction of images from noisy linear measurements is a core problem in image processing, for which convex optimization methods based on total variation (TV) minimization have long been the state of the art. We present an alternative probabilistic reconstruction procedure based on approximate message passing, Scampi, which operates in the compressive regime, where the inverse imaging problem is underdetermined. While the proposed method is related to the recently proposed GrAMPA algorithm of Borgerding, Schniter, and Rangan, we further develop the probabilistic approach to compressive imaging by introducing expectation-maximization learning of model parameters, making Scampi robust to model uncertainties. Additionally, our numerical experiments indicate that Scampi can provide reconstruction performance superior to both GrAMPA and convex approaches to TV reconstruction. Finally, through exhaustive best-case experiments, we show that in many cases the maximal performance of both Scampi and convex TV can be quite close, even though the approaches are a priori distinct. The theoretical reasons for this correspondence remain an open question. Nevertheless, the proposed algorithm remains more practical, as it requires far less parameter tuning to perform optimally.
Approximate message passing (AMP) is a low-cost iterative parameter-estimation technique for certain high-dimensional linear systems with non-Gaussian distributions. However, AMP applies only to independent and identically distributed (IID) transform matrices and may become unreliable for other matrix ensembles, especially ill-conditioned ones. To handle this difficulty, orthogonal/vector AMP (OAMP/VAMP) was proposed for general right-unitarily-invariant matrices. However, the Bayes-optimal OAMP/VAMP requires a high-complexity linear minimum mean square error (LMMSE) estimator. To overcome the disadvantages of AMP and OAMP/VAMP, this paper proposes a memory AMP (MAMP), in which a long-memory matched filter is used for interference suppression. The complexity of MAMP is comparable to that of AMP. The asymptotic Gaussianity of estimation errors in MAMP is guaranteed by the orthogonality principle. A state evolution is derived to asymptotically characterize the performance of MAMP. Based on the state evolution, the relaxation parameters and damping vector in MAMP are optimized. For all right-unitarily-invariant matrices, the optimized MAMP converges to OAMP/VAMP, and is thus Bayes-optimal if it has a unique fixed point. Finally, simulations are provided to verify the validity and accuracy of the theoretical results.