In this paper, we present new results on using orthogonal matching pursuit (OMP) to solve the sparse approximation problem over redundant dictionaries in the complex case (i.e., a complex measurement vector, a complex dictionary, and complex additive white Gaussian noise (CAWGN)). A sufficient condition under which OMP can recover the optimal representation of an exactly sparse signal in the complex case is proposed, both in the noiseless and the bounded Gaussian noise settings. Analogous to the exact recovery condition (ERC) results in the real case, we extend them to the complex case and derive the corresponding ERC. We leverage this theory to show that OMP succeeds for $k$-sparse signals from a class of complex dictionaries. In addition, an application to the geometrical theory of diffraction (GTD) model is presented for the complex case. Finally, simulation experiments illustrate the validity of the theoretical analysis.
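As a concrete illustration of the algorithm discussed above, the following is a minimal Python sketch of OMP in the complex setting; the function name, stopping tolerance, and the assumption of unit-norm dictionary columns are illustrative, not taken from the paper.

```python
import numpy as np

def omp_complex(D, y, k, tol=1e-10):
    """Sketch of OMP over a complex dictionary D (m x n, unit-norm
    columns) for a complex measurement vector y; k is the sparsity."""
    n = D.shape[1]
    residual = y.astype(complex).copy()
    support = []
    coef = np.zeros(0, dtype=complex)
    for _ in range(k):
        # Correlate the residual with every atom; the conjugate
        # transpose matters in the complex case.
        j = int(np.argmax(np.abs(D.conj().T @ residual)))
        support.append(j)
        # Re-fit all selected coefficients by complex least squares.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x_hat = np.zeros(n, dtype=complex)
    x_hat[support] = coef
    return x_hat, support
```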
The orthogonal matching pursuit (OMP) algorithm is a commonly used algorithm for recovering $K$-sparse signals $x \in \mathbb{R}^{n}$ from the linear model $y=Ax$, where $A \in \mathbb{R}^{m \times n}$ is a sensing matrix. A fundamental question in the performance analysis of OMP is the characterization of the probability that it exactly recovers $x$ for a random matrix $A$. Although in many practical applications $x$ has some additional property beyond sparsity (for example, the nonzero entries of $x$ independently and identically follow the Gaussian distribution), none of the existing analyses use such properties to answer the above question. In this paper, we first show that the prior distribution information of $x$ can be used to provide an upper bound on $\|x\|_1^2/\|x\|_2^2$, and then exploit this bound to develop a better lower bound on the probability of exact recovery with OMP in $K$ iterations. Simulation tests are presented to illustrate the superiority of the new bound.
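To see why the distributional prior is informative, here is a small Monte Carlo check (not from the paper; the sparsity level and trial count are arbitrary) of how $\|x\|_1^2/\|x\|_2^2$ behaves when the nonzero entries are i.i.d. Gaussian; the ratio concentrates well below its worst-case value $K$.

```python
import numpy as np

rng = np.random.default_rng(0)
K, trials = 32, 10_000

# Nonzero entries drawn i.i.d. standard Gaussian; by Cauchy-Schwarz
# the ratio ||x||_1^2 / ||x||_2^2 is at most K.
ratios = np.empty(trials)
for t in range(trials):
    x = rng.standard_normal(K)
    ratios[t] = np.linalg.norm(x, 1) ** 2 / np.linalg.norm(x, 2) ** 2

print(f"worst case K : {K}")
print(f"observed mean: {ratios.mean():.1f}")  # roughly 2K/pi for large K
print(f"observed max : {ratios.max():.1f}")
```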
Exact recovery of $K$-sparse signals $x \in \mathbb{R}^{n}$ from linear measurements $y=Ax$, where $A \in \mathbb{R}^{m \times n}$ is a sensing matrix, arises in many applications. The orthogonal matching pursuit (OMP) algorithm is widely used for reconstructing $x$. A fundamental question in the performance analysis of OMP is the characterization of the probability of exact recovery of $x$ for a random matrix $A$, and of the minimal $m$ that guarantees a target recovery performance. In many practical applications, in addition to sparsity, $x$ also has some additional properties. This paper shows that these properties can be used to refine the answer to the above question. We first show that the prior information on the nonzero entries of $x$ can be used to provide an upper bound on $\|x\|_1^2/\|x\|_2^2$. Then, we use this upper bound to develop a lower bound on the probability of exact recovery of $x$ using OMP in $K$ iterations. Furthermore, we develop a lower bound on the number of measurements $m$ that guarantees that the exact recovery probability using $K$ iterations of OMP is no smaller than a given target probability. Finally, we show that when $K=O(\sqrt{\ln n})$, as both $n$ and $K$ go to infinity, for any $0<\zeta\leq 1/\sqrt{\pi}$, $m=2K\ln(n/\zeta)$ measurements are sufficient to ensure that the probability of exactly recovering any $K$-sparse $x$ is no lower than $1-\zeta$ with $K$ iterations of OMP. For $K$-sparse $\alpha$-strongly decaying signals and for $K$-sparse $x$ whose nonzero entries independently and identically follow the Gaussian distribution, the number of measurements sufficient for exact recovery with probability no lower than $1-\zeta$ reduces further to $m=\left(\sqrt{K}+4\sqrt{\frac{\alpha+1}{\alpha-1}\ln(n/\zeta)}\right)^2$ and asymptotically $m\approx 1.9K\ln(n/\zeta)$, respectively.
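For a sense of scale, the sketch below simply evaluates the three sufficient measurement counts quoted above at one illustrative parameter setting (the values of $n$, $K$, $\zeta$, and $\alpha$ are chosen arbitrarily, not taken from the paper).

```python
import numpy as np

n, K, zeta = 10_000, 50, 0.01
alpha = 3.0          # decay factor of an alpha-strongly decaying signal

log_term = np.log(n / zeta)
m_generic = 2 * K * log_term
m_decay = (np.sqrt(K) + 4 * np.sqrt((alpha + 1) / (alpha - 1) * log_term)) ** 2
m_gauss = 1.9 * K * log_term          # asymptotic, Gaussian nonzero entries

print(f"generic K-sparse     : m = {m_generic:7.0f}")
print(f"alpha-strongly decay : m = {m_decay:7.0f}")
print(f"Gaussian entries     : m = {m_gauss:7.0f}")
```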
Greed is good. However, the tighter you squeeze, the less you have. In this paper, a less greedy algorithm for sparse signal reconstruction in compressive sensing, named orthogonal matching pursuit with thresholding, is studied. Using the global 2-coherence, which provides a bridge between the well-known mutual coherence and the restricted isometry constant, the performance of orthogonal matching pursuit with thresholding is analyzed and more general results for sparse signal reconstruction are obtained. It is also shown that, under the same assumptions on the coherence index and the restricted isometry constant as required for orthogonal matching pursuit, the thresholding variant gives exactly the same reconstruction performance with significantly less complexity.
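The paper's precise selection rule is not reproduced here, but the following sketch conveys one natural reading of the thresholding idea: the scan over atoms stops at the first correlation that exceeds a threshold, instead of always locating the global maximum. The threshold parameter t is an assumption of this sketch, not a quantity from the paper.

```python
import numpy as np

def omp_thresholding(A, y, k, t=0.9):
    """Sketch of OMP with thresholding (one plausible variant): stop
    scanning atoms once a correlation exceeds t * ||residual||,
    falling back to the running maximum if none does."""
    n = A.shape[1]
    residual = y.copy()
    support = []
    for _ in range(k):
        level = t * np.linalg.norm(residual)
        best, best_val = -1, -1.0
        for j in range(n):
            if j in support:
                continue
            c = abs(A[:, j] @ residual)
            if c > best_val:
                best, best_val = j, c
            if c >= level:          # good enough atom: stop early
                break
        support.append(best)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat, support
```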
Recovery algorithms play a key role in compressive sampling (CS). Most current CS recovery algorithms were originally designed for one-dimensional (1D) signals, while many practical signals are two-dimensional (2D). By utilizing 2D separable sampling, the 2D signal recovery problem can be converted into a 1D signal recovery problem, so that ordinary 1D recovery algorithms, e.g., orthogonal matching pursuit (OMP), can be applied directly. However, even with 2D separable sampling, the memory usage and complexity at the decoder are still high. This paper develops a novel recovery algorithm called 2D-OMP, which is an extension of 1D-OMP. In 2D-OMP, each atom in the dictionary is a matrix. At each iteration, the decoder projects the sample matrix onto the 2D atoms to select the best matched atom, and then updates the weights of all the already selected atoms via least squares. We show that 2D-OMP is in fact equivalent to 1D-OMP, but it reduces recovery complexity and memory usage significantly. More importantly, by the same methodology used in this paper, one can obtain even higher-dimensional OMP (say 3D-OMP, etc.) with ease.
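Below is a minimal sketch of the 2D-OMP iteration described above, assuming separable sampling of the form $Y = AXB^T$ so that every atom is the rank-one matrix $a_i b_j^T$; forming the flattened atom matrix in the least-squares step is done here only for clarity, and a practical decoder would avoid it.

```python
import numpy as np

def omp_2d(A, B, Y, k):
    """Sketch of 2D-OMP for separable sampling Y = A X B^T."""
    n1, n2 = A.shape[1], B.shape[1]
    R = Y.copy()
    support = []                      # selected (i, j) atom indices
    for _ in range(k):
        C = np.abs(A.T @ R @ B)       # correlations with all atoms at once
        for (i, j) in support:        # mask already-selected atoms
            C[i, j] = -1.0
        i, j = np.unravel_index(np.argmax(C), C.shape)
        support.append((i, j))
        # Least squares over the selected rank-one atoms a_i b_j^T,
        # flattened into columns of a small matrix for clarity.
        M = np.column_stack([np.outer(A[:, p], B[:, q]).ravel()
                             for (p, q) in support])
        coef, *_ = np.linalg.lstsq(M, Y.ravel(), rcond=None)
        R = Y - (M @ coef).reshape(Y.shape)
    X_hat = np.zeros((n1, n2))
    for w, (p, q) in zip(coef, support):
        X_hat[p, q] = w
    return X_hat
```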
In this paper, we put forth a new joint sparse recovery algorithm called signal space matching pursuit (SSMP). The key idea of the proposed SSMP algorithm is to sequentially investigate the support of jointly sparse vectors to minimize the subspace distance to the residual space. Our performance guarantee analysis indicates that SSMP accurately reconstructs any row $K$-sparse matrix of rank $r$ in the full row rank scenario if the sampling matrix $\mathbf{A}$ satisfies $\text{krank}(\mathbf{A}) \ge K+1$, which meets the fundamental minimum requirement on $\mathbf{A}$ to ensure exact recovery. We also show that SSMP guarantees exact reconstruction in at most $K-r+\lceil \frac{r}{L} \rceil$ iterations, provided that $\mathbf{A}$ satisfies the restricted isometry property (RIP) of order $L(K-r)+r+1$ with $$\delta_{L(K-r)+r+1} < \max \left\{ \frac{\sqrt{r}}{\sqrt{K+\frac{r}{4}}+\sqrt{\frac{r}{4}}}, \frac{\sqrt{L}}{\sqrt{K}+1.15 \sqrt{L}} \right\},$$ where $L$ is the number of indices chosen in each iteration. This implies that the requirement on the RIP constant becomes less restrictive as $r$ increases. Such behavior seems natural but has not been reported for most conventional methods. We further show that if $r=1$, then by running more than $K$ iterations, the performance guarantee of SSMP can be improved to $\delta_{\lfloor 7.8K \rfloor} \le 0.155$. In addition, we show that under a suitable RIP condition, the reconstruction error of SSMP is upper bounded by a constant multiple of the noise power, which demonstrates the stability of SSMP under measurement noise. Finally, extensive numerical experiments show that SSMP outperforms conventional joint sparse recovery algorithms in both noiseless and noisy scenarios.
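To make the guarantee concrete, the snippet below evaluates the RIP bound from the theorem above at a few values of $r$ (the choices of $K$, $L$, and $r$ are illustrative, not from the paper); the bound visibly relaxes as the rank grows.

```python
import numpy as np

def ssmp_rip_bound(K, r, L):
    """Evaluate the RIP-constant bound in the SSMP guarantee."""
    t1 = np.sqrt(r) / (np.sqrt(K + r / 4) + np.sqrt(r / 4))
    t2 = np.sqrt(L) / (np.sqrt(K) + 1.15 * np.sqrt(L))
    return max(t1, t2)

# Illustrative values: the requirement loosens as the rank r grows.
for r in (1, 4, 16):
    print(f"K=32, r={r:2d}, L=4 -> delta < {ssmp_rip_bound(32, r, 4):.3f}")
```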