
Complex Sparse Signal Recovery with Adaptive Laplace Priors

Posted by: Zonglong Bai
Publication date: 2020
Research field: Electronic Engineering
Paper language: English





Because of its self-regularizing nature and uncertainty estimation, the Bayesian approach has achieved excellent recovery performance across a wide range of sparse signal recovery applications. However, most methods are based on the real-valued signal model, and the complex-valued signal model is rarely considered. The complex-valued model is typically adopted so that phase information can be utilized, and developing Bayesian models for it is non-trivial. Motivated by the adaptive least absolute shrinkage and selection operator (LASSO) and the sparse Bayesian learning (SBL) framework, this paper proposes a hierarchical model with adaptive Laplace priors for complex sparse signal recovery. The proposed hierarchical Bayesian framework extends easily to the case of multiple measurement vectors. Moreover, the space alternating principle is integrated into the algorithm to avoid matrix inversion. In the experimental section, the proposed algorithm is evaluated on both complex Gaussian random dictionaries and direction-of-arrival (DOA) estimation. The results show that the proposed algorithm offers better sparsity recovery performance than state-of-the-art methods for different types of complex signals.
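The abstract's link between Laplace priors and sparsity can be made concrete: the MAP estimate under an i.i.d. Laplace prior is the LASSO, and for complex-valued signals the corresponding proximal step shrinks coefficient magnitudes while preserving their phases, which is why the complex-valued model retains phase information. The NumPy sketch below implements this plain complex LASSO via ISTA on a toy problem; it is not the paper's hierarchical SBL algorithm with adaptive priors, and all sizes and parameters are illustrative.

```python
import numpy as np

def complex_ista(A, y, lam=0.1, n_iter=200):
    """ISTA for the complex LASSO: min_x 0.5*||y - Ax||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        g = x - A.conj().T @ (A @ x - y) / L   # gradient step on the data fit
        mag = np.abs(g)
        # Complex soft-thresholding: shrink the magnitude, keep the phase.
        x = g / np.maximum(mag, 1e-12) * np.maximum(mag - lam / L, 0.0)
    return x

# Toy problem: m measurements, n atoms, k nonzero complex coefficients.
rng = np.random.default_rng(0)
m, n, k = 60, 128, 5
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x0 = np.zeros(n, dtype=complex)
idx = rng.choice(n, k, replace=False)
x0[idx] = rng.standard_normal(k) + 1j * rng.standard_normal(k)
y = A @ x0
x_hat = complex_ista(A, y, lam=0.02)
print("recovered support:", set(np.argsort(np.abs(x_hat))[-k:]) == set(idx))
```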




Read also

Tom Tirer, Oded Bialer (2020)
Estimating the directions of arrival (DOAs) of multiple sources from a single snapshot obtained by a coherent antenna array is a well-known problem that can be addressed by sparse signal reconstruction methods, in which the DOAs are estimated from the peaks of the recovered high-dimensional signal. In this paper, we consider a more challenging DOA estimation task where the array is composed of non-coherent sub-arrays (i.e., sub-arrays that observe different unknown phase shifts due to using low-cost unsynchronized local oscillators). We formulate this problem as the reconstruction of a joint sparse and low-rank matrix and solve its convex relaxation. While the DOAs can be estimated from the solution of the convex problem, we further show that an improvement is obtained if one instead uses this solution to estimate the phase shifts, forms phase-corrected observations, and applies a final (plain, coherent) sparsity-based DOA estimation. Numerical experiments show that the proposed approach outperforms strategies based on non-coherent processing of the sub-arrays as well as other sparsity-based methods.
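As background for the grid-based sparse formulation this abstract starts from, below is a minimal single-snapshot DOA sketch for an ordinary coherent uniform linear array; it does not implement the paper's non-coherent sub-array model or its joint sparse-and-low-rank relaxation. The array geometry, grid resolution, and the simple greedy solver are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors = 16
grid = np.linspace(-90, 90, 181)                      # 1-degree DOA grid
# Steering dictionary of a half-wavelength-spaced ULA: one column per grid angle.
A = np.exp(1j * np.pi * np.outer(np.arange(n_sensors),
                                 np.sin(np.deg2rad(grid))))

true_doas = np.array([-20.0, 35.0])                   # ground-truth angles (on-grid)
s = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = A[:, np.searchsorted(grid, true_doas)] @ s        # single coherent snapshot
y += 0.05 * (rng.standard_normal(n_sensors) + 1j * rng.standard_normal(n_sensors))

# Greedy recovery (2-step OMP): pick the best-correlated atom, project it out,
# repeat. A full pipeline would use an l1 or Bayesian sparse solver instead;
# either way the DOAs are read off the peaks of the recovered sparse vector.
support, r = [], y.copy()
for _ in range(2):
    support.append(int(np.argmax(np.abs(A.conj().T @ r))))
    As = A[:, support]
    r = y - As @ np.linalg.lstsq(As, y, rcond=None)[0]
print("estimated DOAs:", np.sort(grid[support]))
```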
Jean Daunizeau (2017)
So-called sparse estimators arise in the context of model fitting, when one assumes a priori that only a few (unknown) model parameters deviate from zero. Sparsity constraints can be useful when the estimation problem is under-determined, i.e., when the number of model parameters is much higher than the number of data points. Typically, such constraints are enforced by minimizing the L1 norm, which yields the so-called LASSO estimator. In this work, we propose a simple parameter transform that emulates sparse priors without sacrificing the simplicity and robustness of L2-norm regularization schemes. We show how L1 regularization can be obtained with a sparsifying remapping of parameters under normal (Gaussian) Bayesian priors, and we demonstrate the ensuing variational Laplace approach using Monte-Carlo simulations.
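One concrete instance of such a sparsifying remapping (hedged: the paper's exact transform may differ) is x = z|z|, which makes an L2 penalty on the transformed parameters z numerically identical to an L1 penalty on the original parameters x:

```python
import numpy as np

def remap(z):
    # x_i = z_i * |z_i|, so z_i = sign(x_i) * sqrt(|x_i|) and
    # sum(z_i^2) = sum(|x_i|): a Gaussian (L2) prior on z acts as an
    # L1 / Laplace-like penalty on x.
    return z * np.abs(z)

rng = np.random.default_rng(2)
z = rng.standard_normal(5)
x = remap(z)
print(np.sum(z ** 2), np.sum(np.abs(x)))   # equal by construction
```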
In this paper, we put forth a new joint sparse recovery algorithm called signal space matching pursuit (SSMP). The key idea of the proposed SSMP algorithm is to sequentially investigate the support of jointly sparse vectors to minimize the subspace distance to the residual space. Our performance guarantee analysis indicates that SSMP accurately reconstructs any row $K$-sparse matrix of rank $r$ in the full row rank scenario if the sampling matrix $\mathbf{A}$ satisfies $\text{krank}(\mathbf{A}) \ge K+1$, which meets the fundamental minimum requirement on $\mathbf{A}$ to ensure exact recovery. We also show that SSMP guarantees exact reconstruction in at most $K-r+\lceil \frac{r}{L} \rceil$ iterations, provided that $\mathbf{A}$ satisfies the restricted isometry property (RIP) of order $L(K-r)+r+1$ with $$\delta_{L(K-r)+r+1} < \max \left\{ \frac{\sqrt{r}}{\sqrt{K+\frac{r}{4}}+\sqrt{\frac{r}{4}}}, \frac{\sqrt{L}}{\sqrt{K}+1.15\sqrt{L}} \right\},$$ where $L$ is the number of indices chosen in each iteration. This implies that the requirement on the RIP constant becomes less restrictive as $r$ increases. Such behavior seems natural but has not been reported for most conventional methods. We further show that if $r=1$, then by running more than $K$ iterations, the performance guarantee of SSMP can be improved to $\delta_{\lfloor 7.8K \rfloor} \le 0.155$. In addition, we show that under a suitable RIP condition, the reconstruction error of SSMP is upper bounded by a constant multiple of the noise power, which demonstrates the stability of SSMP under measurement noise. Finally, extensive numerical experiments show that SSMP outperforms conventional joint sparse recovery algorithms in both noiseless and noisy scenarios.
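SSMP's subspace-distance selection rule takes some machinery to implement; as a hedged sketch, the snippet below instead shows the simpler simultaneous-OMP skeleton it generalizes (greedy joint support selection, one index per iteration rather than $L$, with correlations pooled across measurement vectors), so the structure of an MMV greedy recovery is visible. Problem sizes and names are illustrative.

```python
import numpy as np

def somp(A, Y, K):
    """Simultaneous OMP for the MMV model Y = A X with row-K-sparse X.

    A simpler greedy relative of SSMP: score atoms by pooled correlation
    with the residuals instead of SSMP's subspace-distance criterion."""
    support, R = [], Y.copy()
    for _ in range(K):
        scores = np.linalg.norm(A.conj().T @ R, axis=1)  # pooled over snapshots
        scores[support] = -np.inf                        # never reselect an atom
        support.append(int(np.argmax(scores)))
        As = A[:, support]
        R = Y - As @ np.linalg.lstsq(As, Y, rcond=None)[0]
    return sorted(support)

rng = np.random.default_rng(3)
m, n, K, snapshots = 30, 64, 4, 8
A = rng.standard_normal((m, n)) / np.sqrt(m)
idx = rng.choice(n, K, replace=False)
X = np.zeros((n, snapshots))
X[idx] = rng.standard_normal((K, snapshots))
print(somp(A, A @ X, K) == sorted(idx))                  # expect True
```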
Compressive sensing relies on the sparse prior imposed on the signal of interest to solve the ill-posed recovery problem in an under-determined linear system. The objective function used to enforce the sparse prior should be both effective and easily optimizable. Motivated by the entropy concept from information theory, in this paper we propose the generalized Shannon entropy function and Rényi entropy function of the signal as sparsity-promoting regularizers. Both entropy functions are nonconvex and non-separable, and their local minima occur only on the boundaries of the orthants of the Euclidean space. Compared to other popular objective functions, minimizing the generalized entropy functions adaptively promotes multiple high-energy coefficients while suppressing the remaining low-energy coefficients. The corresponding optimization problems can be recast as a series of reweighted $l_1$-norm minimization problems and then solved efficiently by adapting FISTA. Sparse signal recovery experiments on both simulated and real data show that the proposed entropy-minimization approaches perform better than other popular approaches and achieve state-of-the-art performance.
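The recasting into reweighted $l_1$ mentioned above follows a standard template: an outer loop re-derives per-coefficient weights from the current iterate, and an inner weighted-$l_1$ solve runs in between (FISTA in the paper; plain ISTA here for brevity). The weight formula w = 1/(|x|+eps) below is the classic generic choice, not the entropy-derived weights of the paper; all parameters are illustrative.

```python
import numpy as np

def ista(A, y, w, lam=0.05, n_iter=300):
    """Inner solver: weighted-l1 ISTA for min 0.5*||y - Ax||^2 + lam*sum(w_i*|x_i|)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)
    return x

def reweighted_l1(A, y, n_outer=5, eps=1e-2):
    """Outer loop: re-derive per-coefficient weights from the current iterate."""
    w = np.ones(A.shape[1])
    for _ in range(n_outer):
        x = ista(A, y, w)
        w = 1.0 / (np.abs(x) + eps)            # small |x_i| -> large weight
    return x

rng = np.random.default_rng(5)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x0 = np.zeros(n)
x0[[7, 42, 77]] = [1.5, -2.0, 1.0]
x_hat = reweighted_l1(A, A @ x0)
print(np.flatnonzero(np.abs(x_hat) > 0.1))     # expect [ 7 42 77]
```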
Jinming Wen, Wei Yu (2019)
The orthogonal matching pursuit (OMP) algorithm is a commonly used algorithm for recovering $K$-sparse signals $x \in \mathbb{R}^{n}$ from the linear model $y = Ax$, where $A \in \mathbb{R}^{m \times n}$ is a sensing matrix. A fundamental question in the performance analysis of OMP is characterizing the probability that it can exactly recover $x$ for a random matrix $A$. Although in many practical applications $x$ usually has some additional property beyond sparsity (for example, the nonzero entries of $x$ independently and identically follow the Gaussian distribution), none of the existing analyses use such properties to answer this question. In this paper, we first show that the prior distribution of $x$ can be used to provide an upper bound on $\|x\|_1^2/\|x\|_2^2$, and then exploit this bound to develop a sharper lower bound on the probability of exact recovery with OMP in $K$ iterations. Simulation tests illustrate the superiority of the new bound.
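The quantity driving the bound can be checked empirically: Cauchy-Schwarz gives $\|x\|_1^2/\|x\|_2^2 \le K$ for any $K$-sparse $x$, but for i.i.d. Gaussian nonzeros the ratio concentrates well below $K$, which is the slack a distribution-aware analysis can exploit. A quick Monte-Carlo sketch (parameters illustrative):

```python
import numpy as np

# For any K-sparse x, Cauchy-Schwarz gives ||x||_1^2 / ||x||_2^2 <= K.
# With i.i.d. Gaussian nonzeros the ratio concentrates well below K.
rng = np.random.default_rng(4)
K, trials = 20, 10_000
v = rng.standard_normal((trials, K))                 # nonzero entries of x
ratio = np.abs(v).sum(axis=1) ** 2 / (v ** 2).sum(axis=1)
print(f"worst case: {K}, observed mean: {ratio.mean():.2f}, max: {ratio.max():.2f}")
```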