
Simultaneous Sparse Recovery and Blind Demodulation

Published by: Youye Xie
Publication date: 2019
Research field: Information engineering
Paper language: English





The task of finding a sparse signal decomposition in an overcomplete dictionary is made more complicated when the signal undergoes an unknown modulation (or convolution in the complementary Fourier domain). Such simultaneous sparse recovery and blind demodulation problems appear in many applications including medical imaging, super resolution, self-calibration, etc. In this paper, we consider a more general sparse recovery and blind demodulation problem in which each atom comprising the signal undergoes a distinct modulation process. Under the assumption that the modulating waveforms live in a known common subspace, we employ the lifting technique and recast this problem as the recovery of a column-wise sparse matrix from structured linear measurements. In this framework, we accomplish sparse recovery and blind demodulation simultaneously by minimizing the induced atomic norm, which in this problem corresponds to block $\ell_1$ norm minimization. For perfect recovery in the noiseless case, we derive near optimal sample complexity bounds for Gaussian and random Fourier overcomplete dictionaries. We also provide bounds on recovering the column-wise sparse matrix in the noisy case. Numerical simulations illustrate and support our theoretical results.
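As a concrete illustration of the lifted formulation (not the authors' code), the following sketch recovers a column-wise sparse matrix from random Gaussian lifted measurements by minimizing the sum of column-wise $\ell_2$ norms, i.e. the block $\ell_1$ norm referred to above. The dimensions, the measurement operator Phi, and the support threshold are illustrative assumptions.

```python
# A minimal sketch of the lifted convex program: recover a column-wise sparse
# matrix X from structured linear measurements y[n] = <Phi[n], X> by
# minimizing the block l1 norm (sum of column-wise l2 norms).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
N, M, K, J = 64, 128, 3, 4          # measurements, atoms, subspace dim, active atoms

Phi = rng.standard_normal((N, K, M))          # illustrative lifted measurement matrices

X_true = np.zeros((K, M))                     # J nonzero columns = active atoms
support = rng.choice(M, size=J, replace=False)
X_true[:, support] = rng.standard_normal((K, J))
y = np.einsum('nkm,km->n', Phi, X_true)

X = cp.Variable((K, M))
objective = cp.Minimize(cp.sum(cp.norm(X, 2, axis=0)))            # block l1 norm
constraints = [cp.sum(cp.multiply(Phi[n], X)) == y[n] for n in range(N)]
cp.Problem(objective, constraints).solve()

recovered = np.flatnonzero(np.linalg.norm(X.value, axis=0) > 1e-5)
print("true:", np.sort(support), "recovered:", recovered)
```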




Read also

In this paper, we put forth a new joint sparse recovery algorithm called signal space matching pursuit (SSMP). The key idea of the proposed SSMP algorithm is to sequentially investigate the support of jointly sparse vectors to minimize the subspace distance to the residual space. Our performance guarantee analysis indicates that SSMP accurately reconstructs any row $K$-sparse matrix of rank $r$ in the full row rank scenario if the sampling matrix $\mathbf{A}$ satisfies $\text{krank}(\mathbf{A}) \ge K+1$, which meets the fundamental minimum requirement on $\mathbf{A}$ to ensure exact recovery. We also show that SSMP guarantees exact reconstruction in at most $K-r+\lceil \frac{r}{L} \rceil$ iterations, provided that $\mathbf{A}$ satisfies the restricted isometry property (RIP) of order $L(K-r)+r+1$ with $$\delta_{L(K-r)+r+1} < \max \left\{ \frac{\sqrt{r}}{\sqrt{K+\frac{r}{4}}+\sqrt{\frac{r}{4}}}, \frac{\sqrt{L}}{\sqrt{K}+1.15\sqrt{L}} \right\},$$ where $L$ is the number of indices chosen in each iteration. This implies that the requirement on the RIP constant becomes less restrictive as $r$ increases. Such behavior seems natural but has not been reported for most conventional methods. We further show that if $r=1$, then by running more than $K$ iterations, the performance guarantee of SSMP can be improved to $\delta_{\lfloor 7.8K \rfloor} \le 0.155$. In addition, we show that under a suitable RIP condition, the reconstruction error of SSMP is upper bounded by a constant multiple of the noise power, which demonstrates the stability of SSMP under measurement noise. Finally, extensive numerical experiments show that SSMP outperforms conventional joint sparse recovery algorithms in both noiseless and noisy scenarios.
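The SSMP selection rule is subtle; the following simplified sketch uses an orthogonal-least-squares-type proxy for the subspace-distance criterion, adding $L$ indices per iteration and finishing with a least-squares fit on the identified support. Function and variable names are illustrative, and this is not the authors' reference implementation.

```python
# A simplified greedy sketch in the spirit of SSMP for joint sparse recovery.
import numpy as np

def ssmp_sketch(A, Y, K, L=1):
    """Greedy recovery of a row K-sparse X from Y = A @ X (joint sparse model)."""
    n = A.shape[1]
    S = []                                   # estimated row support
    while len(S) < K:
        # Score each remaining index by the measurement energy captured when
        # its column is appended to the current support (an OLS-type proxy
        # for minimizing the subspace distance to the residual space).
        scores = np.full(n, -np.inf)
        for i in range(n):
            if i in S:
                continue
            As = A[:, S + [i]]
            P = As @ np.linalg.pinv(As)      # orthogonal projector onto span(As)
            scores[i] = np.linalg.norm(P @ Y)
        S += list(np.argsort(scores)[-L:])   # add the L best indices
    X = np.zeros((n, Y.shape[1]))
    X[S] = np.linalg.pinv(A[:, S]) @ Y       # least squares on the support
    return X, sorted(S)

# Tiny usage example with a row 3-sparse signal matrix of rank 3.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 40)); A /= np.linalg.norm(A, axis=0)
X0 = np.zeros((40, 3)); X0[[3, 11, 27]] = rng.standard_normal((3, 3))
Xh, Sh = ssmp_sketch(A, A @ X0, K=3)
print(Sh)   # expect [3, 11, 27]
```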
Compressive sensing relies on the sparse prior imposed on the signal of interest to solve the ill-posed recovery problem in an under-determined linear system. The objective function used to enforce the sparse prior should be both effective and easy to optimize. Motivated by the entropy concept from information theory, in this paper we propose the generalized Shannon entropy function and Rényi entropy function of the signal as sparsity-promoting regularizers. Both entropy functions are nonconvex and non-separable, and their local minima occur only on the boundaries of the orthants of the Euclidean space. Compared to other popular objective functions, minimizing the generalized entropy functions adaptively promotes multiple high-energy coefficients while suppressing the remaining low-energy coefficients. The corresponding optimization problems can be recast into a series of reweighted $\ell_1$-norm minimization problems and then solved efficiently by adapting FISTA. Sparse signal recovery experiments on both simulated and real data show that the proposed entropy-function minimization approaches outperform other popular approaches and achieve state-of-the-art performance.
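The reduction the abstract describes, an outer reweighting loop wrapped around an inner weighted $\ell_1$ solve, can be sketched as follows. The FISTA inner loop is standard, while the reweighting rule shown is a generic illustrative choice rather than the entropy-derived weights from the paper.

```python
# A sketch of the reweighted-l1 strategy: each outer pass solves a weighted
# l1 problem with FISTA, then refreshes the weights from the current iterate.
import numpy as np

def fista_weighted_l1(A, y, w, lam=0.1, iters=300):
    """FISTA for min_x 0.5*||A x - y||^2 + lam * sum_i w_i |x_i|."""
    L = np.linalg.norm(A, 2) ** 2                # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = z - A.T @ (A @ z - y) / L            # gradient step
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam * w / L, 0.0)  # soft threshold
        t_new = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        z = x_new + (t - 1) / t_new * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x

def reweighted_l1(A, y, outer=5, eps=1e-3):
    x = np.zeros(A.shape[1])
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = fista_weighted_l1(A, y, w)
        w = 1.0 / (np.abs(x) + eps)              # illustrative reweighting rule
    return x
```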
We study the problem of reconstructing a block-sparse signal from compressively sampled measurements. In certain applications, in addition to the inherent block-sparse structure of the signal, some prior information about the block support, i.e. the blocks containing non-zero elements, might be available. Although many block-sparse recovery algorithms have been investigated in the Bayesian framework, it is still unclear how to optimally incorporate the information about the probability of occurrence into regularization-based block-sparse recovery. In this work, we bridge these fields with the aid of a new concept in conic integral geometry. Specifically, we solve a weighted optimization problem when the prior distribution of the block support is available. Moreover, we obtain the unique weights that minimize the expected required number of measurements. Our simulations on both synthetic and real data confirm that these weights considerably decrease the required sample complexity.
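A hedged sketch of the weighted program in question: each block's $\ell_2$ norm is penalized by a weight reflecting its prior probability of being active. The rule $w_j = 1 - p_j$ below is purely a placeholder; the paper's contribution is deriving the weights that minimize the expected number of required measurements via conic integral geometry.

```python
# Illustrative prior-weighted block-sparse recovery: blocks with a higher
# prior probability of being active receive a smaller weight.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n_blocks, b, m = 16, 4, 48                  # blocks, block size, measurements
A = rng.standard_normal((m, n_blocks * b))

p = rng.uniform(0.05, 0.3, n_blocks)        # prior prob. that each block is active
active = rng.random(n_blocks) < p
x0 = np.zeros(n_blocks * b)
for j in np.flatnonzero(active):
    x0[j * b:(j + 1) * b] = rng.standard_normal(b)
y = A @ x0

x = cp.Variable(n_blocks * b)
w = 1.0 - p                                  # illustrative weight rule, not the paper's
obj = cp.Minimize(sum(w[j] * cp.norm(x[j * b:(j + 1) * b]) for j in range(n_blocks)))
cp.Problem(obj, [A @ x == y]).solve()

xh = x.value.reshape(n_blocks, b)
print("active:", np.flatnonzero(active))
print("recovered:", np.flatnonzero(np.linalg.norm(xh, axis=1) > 1e-5))
```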
We propose and analyze a solution to the problem of recovering a block-sparse signal with sparse blocks from linear measurements. Such problems naturally emerge, inter alia, in the context of mobile communication, in order to meet the scalability and low-complexity requirements of massive antenna systems and massive machine-type communication. We introduce a new variant of the Hard Thresholding Pursuit (HTP) algorithm referred to as HiHTP. We provide both a proof of convergence and a recovery guarantee for noisy Gaussian measurements that exhibit an improved asymptotic scaling in terms of the sampling complexity in comparison with the usual HTP algorithm. Furthermore, hierarchically sparse signals and Kronecker product structured measurements naturally arise together in a variety of applications. We establish the efficient reconstruction of hierarchically sparse signals from Kronecker product measurements using the HiHTP algorithm. Additionally, we provide analytical results that connect our recovery conditions to generalized coherence measures. Again, our recovery results exhibit a substantial improvement in the asymptotic sampling complexity scaling over the standard setting. Finally, we validate in numerical experiments that, for hierarchically sparse signals, HiHTP performs significantly better than HTP.
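The ingredient that distinguishes HiHTP from plain HTP is the hierarchical hard-thresholding operator: keep the $s$ most energetic blocks and, within each kept block, the $\sigma$ largest entries. The sketch below shows only this operator with illustrative parameter names; the full HiHTP iteration (gradient step, support identification, least squares on the support) is not shown.

```python
# A sketch of the hierarchical hard-thresholding operator used by HiHTP.
import numpy as np

def hierarchical_threshold(x, block_size, s, sigma):
    """Project x onto the set of (s, sigma)-hierarchically sparse vectors."""
    blocks = x.reshape(-1, block_size)
    # Within every block, zero all but the sigma largest-magnitude entries.
    idx = np.argsort(np.abs(blocks), axis=1)[:, :-sigma]
    pruned = blocks.copy()
    np.put_along_axis(pruned, idx, 0.0, axis=1)
    # Keep only the s blocks with the largest remaining energy.
    energy = np.linalg.norm(pruned, axis=1)
    keep = np.argsort(energy)[-s:]
    out = np.zeros_like(blocks)
    out[keep] = pruned[keep]
    return out.ravel()

# Tiny usage example: 3 blocks of size 4, keep 2 blocks with 2 entries each.
x = np.array([0.1, 3.0, -0.2, 0.0,  2.0, -2.5, 0.3, 0.1,  0.0, 0.2, 0.1, 0.0])
print(hierarchical_threshold(x, block_size=4, s=2, sigma=2))
```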
We study the problem of recursively recovering a time sequence of sparse vectors, $S_t$, from measurements $M_t := S_t + L_t$ that are corrupted by structured noise $L_t$ which is dense and can have large magnitude. The structure we require is that $L_t$ should lie in a low-dimensional subspace that is either fixed or changes slowly enough, and that the eigenvalues of its covariance matrix are clustered. We do not assume any model on the sequence of sparse vectors. Their support sets and their nonzero element values may be either independent or correlated over time (in many applications they are correlated). The only requirement is that there be some support change every so often. We introduce a novel solution approach called Recursive Projected Compressive Sensing with cluster-PCA (ReProCS-cPCA) that addresses some of the limitations of earlier work. Under mild assumptions, we show that, with high probability, ReProCS-cPCA can exactly recover the support set of $S_t$ at all times, and that the reconstruction errors of both $S_t$ and $L_t$ are upper bounded by a time-invariant and small value.
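The "projected compressive sensing" step at the heart of ReProCS-style methods admits a compact sketch: project the measurement onto the orthogonal complement of the current subspace estimate to cancel the dense corruption $L_t$, then recover $S_t$ by $\ell_1$ minimization. The subspace estimate and the cluster-PCA update are not shown; all dimensions and names below are illustrative.

```python
# A minimal sketch of the projected-CS step with an (assumed exact) subspace
# estimate U for the dense noise L_t.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(3)
n, r = 60, 3
U = np.linalg.qr(rng.standard_normal((n, r)))[0]   # subspace of the dense noise

s_t = np.zeros(n)
s_t[[5, 17, 42]] = [2.0, -1.5, 3.0]                # sparse vector of interest
l_t = U @ rng.standard_normal(r)                   # dense structured noise
m_t = s_t + l_t                                    # observation M_t := S_t + L_t

P_perp = np.eye(n) - U @ U.T                       # projector onto span(U)-perp
y = P_perp @ m_t                                   # equals P_perp @ s_t exactly

s = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm1(s)), [P_perp @ s == y]).solve()
print(np.flatnonzero(np.abs(s.value) > 0.1))       # expect [ 5 17 42]
```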