
Exact Recovery Conditions for Sparse Representations with Partial Support Information

Posted by: Cedric Herzet
Publication date: 2013
Research field: Informatics engineering
Paper language: English





We address the exact recovery of a $k$-sparse vector in the noiseless setting when some partial information on the support is available. This partial information takes the form of either a subset of the true support or an approximate subset including wrong atoms as well. We derive a new sufficient and worst-case necessary (in some sense) condition for the success of some procedures based on $\ell_p$-relaxation, Orthogonal Matching Pursuit (OMP) and Orthogonal Least Squares (OLS). Our result is based on the coherence $\mu$ of the dictionary and relaxes the well-known condition $\mu < 1/(2k-1)$ ensuring the recovery of any $k$-sparse vector in the non-informed setup. It reads $\mu < 1/(2k-g+b-1)$ when the informed support is composed of $g$ good atoms and $b$ wrong atoms. We emphasize that our condition is complementary to some restricted-isometry-based conditions by showing that neither implies the other. Because this mutual coherence condition is common to all procedures, we carry out a finer analysis based on the Null Space Property (NSP) and the Exact Recovery Condition (ERC). Connections are established regarding the characterization of $\ell_p$-relaxation procedures and OMP in the informed setup. First, we emphasize that the truncated NSP enjoys an ordering property when $p$ is decreased. Second, the partial ERC for OMP (ERC-OMP) implies in turn the truncated NSP for the informed $\ell_1$ problem, and the truncated NSP for $p<1$.
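As a hedged illustration of the headline bound, here is a minimal NumPy sketch, assuming an arbitrary random Gaussian dictionary and toy dimensions of our own choosing (the function names are ours, not the authors'): it computes the mutual coherence $\mu$ and evaluates both the classical condition $\mu < 1/(2k-1)$ and the informed condition $\mu < 1/(2k-g+b-1)$.

```python
import numpy as np

def mutual_coherence(A):
    """Largest absolute inner product between distinct normalized columns."""
    A = A / np.linalg.norm(A, axis=0)   # column-normalize the dictionary
    G = np.abs(A.T @ A)                 # Gram matrix magnitudes
    np.fill_diagonal(G, 0.0)            # ignore self-correlations
    return G.max()

def informed_recovery_condition(mu, k, g=0, b=0):
    """Sufficient coherence condition mu < 1/(2k - g + b - 1).

    g = number of good (true) atoms in the informed support,
    b = number of wrong atoms; g = b = 0 recovers the classical
    non-informed bound mu < 1/(2k - 1)."""
    return mu < 1.0 / (2 * k - g + b - 1)

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))      # toy dictionary (assumption)
mu = mutual_coherence(A)
k = 3
print(f"mu = {mu:.3f}")
print("non-informed bound holds:", informed_recovery_condition(mu, k))
print("with g=2 good atoms:     ", informed_recovery_condition(mu, k, g=2))
```

Note how the informed condition with $g > b$ is strictly weaker than the classical one, so dictionaries failing the non-informed test can still pass once good atoms are known.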




Read also

Jinming Wen, Wei Yu (2019)
The orthogonal matching pursuit (OMP) algorithm is a commonly used algorithm for recovering $K$-sparse signals $x \in \mathbb{R}^{n}$ from the linear model $y = Ax$, where $A \in \mathbb{R}^{m \times n}$ is a sensing matrix. A fundamental question in the performance analysis of OMP is the characterization of the probability that it can exactly recover $x$ for a random matrix $A$. Although in many practical applications, in addition to the sparsity, $x$ usually also has some additional property (for example, the nonzero entries of $x$ independently and identically follow the Gaussian distribution), no existing analysis uses these properties to answer the above question. In this paper, we first show that the prior distribution information of $x$ can be used to provide an upper bound on $\|x\|_1^2/\|x\|_2^2$, and then exploit this bound to develop a better lower bound on the probability of exact recovery with OMP in $K$ iterations. Simulation tests are presented to illustrate the superiority of the new bound.
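Since several of these abstracts rest on the behaviour of OMP in $K$ iterations, a minimal textbook-style sketch may help; this is our own generic NumPy implementation (names and toy dimensions are assumptions, not the authors' code): each iteration selects the column most correlated with the residual, then re-fits by least squares on the selected support.

```python
import numpy as np

def omp(A, y, K):
    """Textbook Orthogonal Matching Pursuit: K greedy iterations.

    Returns the estimated coefficient vector and its support."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(K):
        # atom most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares re-fit on the selected support
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat, support

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 200))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
x = np.zeros(200)
x[[3, 50, 120]] = rng.standard_normal(3)
x_hat, S = omp(A, A @ x, K=3)
print(sorted(S), np.allclose(x_hat, x, atol=1e-8))
```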
The problem of estimating a sparse signal from low-dimensional noisy observations arises in many applications, including super-resolution, signal deconvolution, and radar imaging. In this paper, we consider a sparse signal model with non-stationary modulations, in which each dictionary atom contributing to the observations undergoes an unknown, distinct modulation. By applying the lifting technique, under the assumption that the modulating signals live in a common subspace, we recast this sparse recovery and non-stationary blind demodulation problem as the recovery of a column-wise sparse matrix from structured linear observations, and propose to solve it via block $\ell_1$-norm regularized quadratic minimization. Due to observation noise, the sparse signal and modulation process cannot be recovered exactly. Instead, we aim to recover the sparse support of the ground-truth signal and bound the recovery errors of the signal's non-zero components and the modulation process. In particular, we derive sufficient conditions on the sample complexity and regularization parameter for exact support recovery and bound the recovery error on the support. Numerical simulations verify and support our theoretical findings, and we demonstrate the effectiveness of our model in the application of single-molecule imaging.
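The block $\ell_1$-norm regularized quadratic program described above has a generic form that is easy to prototype; the CVXPY sketch below is our own stand-in (the linear map `Phi`, the dimensions, and $\lambda$ are illustrative assumptions, not the paper's structured operator): it minimizes a least-squares data fit plus the sum of column $\ell_2$ norms, which promotes column-wise sparsity of the lifted matrix.

```python
import numpy as np
import cvxpy as cp

# Toy dimensions (assumptions): n atoms, r-dimensional modulation
# subspace, m structured linear observations.
n, r, m = 20, 4, 60
rng = np.random.default_rng(2)
Phi = rng.standard_normal((m, n * r))   # stand-in for the structured linear map
y = rng.standard_normal(m)              # stand-in noisy observations

X = cp.Variable((r, n))                 # lifted unknown, column-wise sparse
lam = 0.1
data_fit = 0.5 * cp.sum_squares(Phi @ cp.vec(X) - y)
block_l1 = cp.sum(cp.norm(X, 2, axis=0))    # sum of column 2-norms
prob = cp.Problem(cp.Minimize(data_fit + lam * block_l1))
prob.solve()
print("recovered column norms:", np.linalg.norm(X.value, axis=0).round(3))
```

Larger $\lambda$ drives more columns of $X$ to zero, trading data fit for a sparser estimated support.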
Rong Fan, Qun Wan, Yipeng Liu (2012)
In this paper, we present new results on using orthogonal matching pursuit (OMP) to solve the sparse approximation problem over redundant dictionaries for complex cases (i.e., complex measurement vector, complex dictionary and complex additive white Gaussian noise (CAWGN)). A sufficient condition under which OMP can recover the optimal representation of an exactly sparse signal in the complex case is proposed in both noiseless and bounded Gaussian noise settings. Similar to the exact recovery condition (ERC) results in real cases, we extend them to the complex case and derive the corresponding ERC in the paper. We leverage this theory to show that OMP succeeds for $k$-sparse signals from a class of complex dictionaries. Besides, an application with the geometrical theory of diffraction (GTD) model is presented for complex cases. Finally, simulation experiments illustrate the validity of the theoretical analysis.
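The ERC invoked here (and in the headline paper) is straightforward to verify numerically for a candidate support $S$: OMP recovery is guaranteed when $\max_{j \notin S} \|A_S^{\dagger} a_j\|_1 < 1$. A minimal check that works for real or complex dictionaries, with an arbitrary toy dictionary of our own:

```python
import numpy as np

def erc_holds(A, support):
    """Exact Recovery Condition: max_{j not in S} ||pinv(A_S) a_j||_1 < 1.

    Works for real or complex dictionaries alike."""
    S = list(support)
    rest = [j for j in range(A.shape[1]) if j not in S]
    P = np.linalg.pinv(A[:, S])                 # pseudo-inverse of support atoms
    vals = np.abs(P @ A[:, rest]).sum(axis=0)   # l1 norm of each off-support column
    return vals.max() < 1.0

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80)) + 1j * rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)                  # unit-norm complex atoms
print(erc_holds(A, support=[0, 1, 2]))
```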
Jinming Wen, Rui Zhang (2020)
Exact recovery of $K$-sparse signals $x \in \mathbb{R}^{n}$ from linear measurements $y = Ax$, where $A \in \mathbb{R}^{m \times n}$ is a sensing matrix, arises from many applications. The orthogonal matching pursuit (OMP) algorithm is widely used for reconstructing $x$. A fundamental question in the performance analysis of OMP is the characterization of the probability of exact recovery of $x$ for a random matrix $A$ and the minimal $m$ to guarantee a target recovery performance. In many practical applications, in addition to sparsity, $x$ also has some additional properties. This paper shows that these properties can be used to refine the answer to the above question. We first show that the prior information on the nonzero entries of $x$ can be used to provide an upper bound on $\|x\|_1^2/\|x\|_2^2$. Then, we use this upper bound to develop a lower bound on the probability of exact recovery of $x$ using OMP in $K$ iterations. Furthermore, we develop a lower bound on the number of measurements $m$ to guarantee that the exact recovery probability using $K$ iterations of OMP is no smaller than a given target probability. Finally, we show that when $K = O(\sqrt{\ln n})$, as both $n$ and $K$ go to infinity, for any $0 < \zeta \leq 1/\sqrt{\pi}$, $m = 2K\ln(n/\zeta)$ measurements are sufficient to ensure that the probability of exactly recovering any $K$-sparse $x$ is no lower than $1-\zeta$ with $K$ iterations of OMP. For $K$-sparse $\alpha$-strongly decaying signals and for $K$-sparse $x$ whose nonzero entries independently and identically follow the Gaussian distribution, the number of measurements sufficient for exact recovery with probability no lower than $1-\zeta$ reduces further to $m = \big(\sqrt{K} + 4\sqrt{\tfrac{\alpha+1}{\alpha-1}\ln(n/\zeta)}\big)^2$ and asymptotically $m \approx 1.9K\ln(n/\zeta)$, respectively.
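To get a feel for these sample-complexity expressions, the short sketch below simply evaluates them for an illustrative choice of $(n, K, \zeta, \alpha)$ of our own; no claim is made beyond plugging numbers into the stated formulas.

```python
import math

# Illustrative values (our choice, not from the paper)
n, K, zeta, alpha = 10_000, 50, 0.1, 2.0

m_generic = 2 * K * math.log(n / zeta)                  # any K-sparse x
m_decay = (math.sqrt(K)
           + 4 * math.sqrt((alpha + 1) / (alpha - 1)
                           * math.log(n / zeta))) ** 2  # alpha-strongly decaying
m_gauss = 1.9 * K * math.log(n / zeta)                  # i.i.d. Gaussian entries

print(f"generic:  m >= {m_generic:.0f}")
print(f"decaying: m >= {m_decay:.0f}")
print(f"gaussian: m ~  {m_gauss:.0f}")
```

With these numbers the $\alpha$-strongly-decaying bound already falls below the generic $2K\ln(n/\zeta)$ requirement, and the gap widens as $K$ grows.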
In this work, we consider the problem of recovering analysis-sparse signals from under-sampled measurements when some prior information about the support is available. We incorporate such information in the recovery stage by suitably tuning the weights in a weighted $\ell_1$ analysis optimization problem. Indeed, we try to set the weights such that the method succeeds with the minimum number of measurements. For this purpose, we exploit the upper bound on the statistical dimension of a certain cone to determine the weights. Our numerical simulations confirm that the introduced method with tuned weights outperforms the standard $\ell_1$ analysis technique.
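The weighted $\ell_1$ analysis program has the compact generic form: minimize $\|W\Omega x\|_1$ subject to $y = Ax$, where $\Omega$ is the analysis operator and $W$ down-weights coefficients believed to be active. A minimal CVXPY sketch under toy assumptions of our own ($\Omega$ taken as first-order differences, oracle weights purely for illustration; this is not the paper's statistical-dimension tuning rule):

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
n, m = 100, 40
Omega = np.eye(n) - np.eye(n, k=1)      # toy analysis operator: differences
A = rng.standard_normal((m, n))         # under-sampling measurement matrix

# Ground truth: piecewise-constant signal, hence sparse under Omega
x_true = np.repeat([1.0, -2.0, 0.5, 3.0], n // 4)
y = A @ x_true

# Weights: down-weight rows believed (prior information) to be active
w = np.ones(n)
w[np.abs(Omega @ x_true) > 1e-8] = 0.1  # oracle prior, illustration only

x = cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(cp.multiply(w, Omega @ x))),
                  [A @ x == y])
prob.solve()
print("recovery error:", np.linalg.norm(x.value - x_true))
```

Setting all weights to 1 recovers the standard $\ell_1$ analysis baseline, which makes the benefit of tuned weights easy to compare empirically.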