This paper is concerned with the problem of recovering a structured signal from a relatively small number of corrupted random measurements. Sharp phase transitions have been observed numerically when different convex programming procedures are used to solve this problem. This paper presents theoretical explanations for these phenomena by employing basic tools from Gaussian process theory. Specifically, we identify the precise locations of the phase transitions for both constrained and penalized recovery procedures. Our theoretical results show that these phase transitions are determined by geometric measures of structure, e.g., the spherical Gaussian width of a tangent cone and the Gaussian (squared) distance to a scaled subdifferential. Using the established phase transition theory, we further investigate the relationship between these two kinds of recovery procedures, which also reveals an optimal strategy (in the sense of Lagrange theory) for choosing the tradeoff parameter in the penalized recovery procedure. Numerical experiments are provided to verify our theoretical results.
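As a minimal worked illustration of how such a phase-transition location can be computed, the sketch below evaluates the statistical dimension of the $\ell_1$ descent cone at an $s$-sparse point via the standard Amelunxen-Lotz-McCoy-Tropp formula. This is the classical uncorrupted sparse-recovery instance, not the corrupted setting of the abstract above, and the dimensions are illustrative.

```python
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def l1_statistical_dimension(d, s):
    """Statistical dimension of the descent cone of the l1 norm at an
    s-sparse point in R^d; noiseless l1 recovery from m Gaussian
    measurements transitions from failure to success near m = this value."""
    def J(tau):
        # E[(|g| - tau)_+^2] = 2[(1 + tau^2) Phi(-tau) - tau phi(tau)], g ~ N(0, 1)
        tail = 2 * ((1 + tau**2) * norm.cdf(-tau) - tau * norm.pdf(tau))
        return s * (1 + tau**2) + (d - s) * tail
    return minimize_scalar(J, bounds=(0, 10), method="bounded").fun

print(l1_statistical_dimension(d=1000, s=50))  # approximate transition location
```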
This paper studies the problem of recovering a structured signal from a relatively small number of corrupted non-linear measurements. Assuming that the signal and the corruption are contained in structure-promoting sets, we propose an extended Lasso to disentangle the signal from the corruption. We also provide conditions under which this recovery procedure can successfully reconstruct both the signal and the corruption.
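A hedged sketch of the kind of extended Lasso this abstract describes: the constrained program below uses $\ell_1$ balls as stand-ins for the structure-promoting sets and a hypothetical 1-bit link as the non-linearity; the paper's exact observation model and sets may differ, and the side information $\|x_0\|_1$, $\|v_0\|_1$ is assumed only for illustration.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
m, d = 60, 150
x0 = np.zeros(d); x0[rng.choice(d, 4, replace=False)] = rng.standard_normal(4)
v0 = np.zeros(m); v0[rng.choice(m, 3, replace=False)] = rng.standard_normal(3)
A = rng.standard_normal((m, d))
y = np.sign(A @ x0) + v0        # hypothetical non-linear link plus sparse corruption

# extended Lasso: least squares over the structure-promoting sets
x, v = cp.Variable(d), cp.Variable(m)
prob = cp.Problem(cp.Minimize(cp.sum_squares(y - A @ x - v)),
                  [cp.norm1(x) <= np.linalg.norm(x0, 1),   # illustrative side information
                   cp.norm1(v) <= np.linalg.norm(v0, 1)])
prob.solve()
print(prob.value)
```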
This paper studies the problem of accurately recovering a structured signal from a small number of corrupted sub-Gaussian measurements. We consider three different procedures to reconstruct signal and corruption when different kinds of prior knowledge are available. In each case, we provide conditions (in terms of the number of measurements) for stable signal recovery from structured corruption with added unstructured noise. Our results theoretically demonstrate how to choose the regularization parameters in both partially and fully penalized recovery procedures and shed some light on the relationships among the three procedures. The key ingredient in our analysis is an extended matrix deviation inequality for isotropic sub-Gaussian matrices, which implies a tight lower bound for the restricted singular value of the extended sensing matrix. Numerical experiments are presented to verify our theoretical results.
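A minimal sketch of two of the three procedure types the abstract refers to, taking sparsity as the structure of both signal and corruption; the tradeoff values lam_x and lam_v are placeholders, not the principled choices the paper derives.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
m, d = 100, 300
x0 = np.zeros(d); x0[rng.choice(d, 8, replace=False)] = 1.0    # sparse signal
v0 = np.zeros(m); v0[rng.choice(m, 5, replace=False)] = 5.0    # sparse (structured) corruption
A = rng.standard_normal((m, d)) / np.sqrt(m)                   # isotropic sub-Gaussian rows
y = A @ x0 + v0 + 0.01 * rng.standard_normal(m)                # plus unstructured noise

x, v = cp.Variable(d), cp.Variable(m)
lam_x, lam_v = 0.05, 0.05   # placeholder regularization parameters

# fully penalized: both structures enter through penalties
fully = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - A @ x - v)
                               + lam_x * cp.norm1(x) + lam_v * cp.norm1(v)))
fully.solve()

# partially penalized: prior knowledge of ||v0||_1 constrains the corruption instead
partially = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(y - A @ x - v)
                                   + lam_x * cp.norm1(x)),
                       [cp.norm1(v) <= np.linalg.norm(v0, 1)])
partially.solve()
```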
A new family of operators, coined hierarchical measurement operators, is introduced and discussed within the well-known hierarchical sparse recovery framework. Such an operator is a composition of block and mixing operations and notably contains the Kronecker product as a special case. Results on their hierarchical restricted isometry property (HiRIP) are derived, generalizing prior work on the recovery of hierarchically sparse signals from Kronecker-structured linear measurements. Specifically, these results show that, surprisingly, the sparsity properties of the block and mixing parts can be traded against each other. The measurement structure is well motivated by a massive random access channel design in communication engineering. Numerical evaluation of user detection rates demonstrates the substantial benefit of the theoretical framework.
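To make the operator structure concrete, the sketch below builds the Kronecker special case: a mixing operator A acting across blocks composed with a block operator B acting within blocks, applied to an $(s, \sigma)$-hierarchically sparse signal. All sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 8, 16            # N blocks of length n: the signal lives in R^{N*n}
mA, mB = 6, 4           # mixing (across blocks) and block (within block) output sizes
A = rng.standard_normal((mA, N))   # mixing operator
B = rng.standard_normal((mB, n))   # block operator

# (s, sigma)-hierarchically sparse signal: s active blocks, each sigma-sparse
s, sigma = 2, 3
X = np.zeros((N, n))
for i in rng.choice(N, s, replace=False):
    X[i, rng.choice(n, sigma, replace=False)] = rng.standard_normal(sigma)

# the Kronecker measurement (A kron B) vec(X) equals the blockwise product A X B^T
Y = A @ X @ B.T
assert np.allclose(Y.reshape(-1), np.kron(A, B) @ X.reshape(-1))
```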
Traditional sampling theories consider the problem of reconstructing an unknown signal $x$ from a series of samples. A prevalent assumption which often guarantees recovery from the given measurements is that $x$ lies in a known subspace. Recently, there has been growing interest in nonlinear but structured signal models, in which $x$ lies in a union of subspaces. In this paper we develop a general framework for robust and efficient recovery of such signals from a given set of samples. More specifically, we treat the case in which $x$ lies in a sum of $k$ subspaces, chosen from a larger set of $m$ possibilities. The samples are modelled as inner products with an arbitrary set of sampling functions. To derive an efficient and robust recovery algorithm, we show that our problem can be formulated as that of recovering a block-sparse vector whose non-zero elements appear in fixed blocks. We then propose a mixed $\ell_2/\ell_1$ program for block-sparse recovery. Our main result is an equivalence condition under which the proposed convex algorithm is guaranteed to recover the original signal. This result relies on the notion of the block restricted isometry property (RIP), which is a generalization of the standard RIP used extensively in the context of compressed sensing. Based on the block RIP, we also prove stability of our approach in the presence of noise and modelling errors. A special case of our framework is that of recovering multiple measurement vectors (MMV) that share a joint sparsity pattern. Adapting our results to this context leads to new MMV recovery methods as well as equivalence conditions under which the entire set can be determined efficiently.
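A minimal sketch of the mixed $\ell_2/\ell_1$ program: minimize the sum of per-block $\ell_2$ norms subject to data consistency. The block lengths, counts, and Gaussian sampling operator are illustrative assumptions; the paper allows arbitrary sampling functions.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(3)
m_blocks, d_block, n_meas = 20, 4, 32          # m candidate blocks of length d_block
d = m_blocks * d_block
x_true = np.zeros(d)
for b in rng.choice(m_blocks, 2, replace=False):   # k = 2 active blocks
    x_true[b * d_block:(b + 1) * d_block] = rng.standard_normal(d_block)
A = rng.standard_normal((n_meas, d)) / np.sqrt(n_meas)
y = A @ x_true

# mixed l2/l1 objective: sum of block l2 norms, subject to A x = y
x = cp.Variable(d)
block_l2 = sum(cp.norm(x[b * d_block:(b + 1) * d_block], 2) for b in range(m_blocks))
prob = cp.Problem(cp.Minimize(block_l2), [A @ x == y])
prob.solve()
print(np.linalg.norm(x.value - x_true))   # small when recovery succeeds
```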
We consider the problem of clustering a graph $G$ into two communities by observing a subset of the vertex correlations. Specifically, we consider the inverse problem with observed variables $Y = B_G x \oplus Z$, where $B_G$ is the incidence matrix of a graph $G$, $x$ is the vector of unknown vertex variables (with a uniform prior), and $Z$ is a noise vector with Bernoulli$(\varepsilon)$ i.i.d. entries. All variables and operations are Boolean. This model is motivated by coding, synchronization, and community detection problems. In particular, it corresponds to a stochastic block model or a correlation clustering problem with two communities and censored edges. Without noise, exact recovery (up to global flip) of $x$ is possible if and only if the graph $G$ is connected, with a sharp threshold at the edge probability $\log(n)/n$ for Erdős–Rényi random graphs. The first goal of this paper is to determine how the edge probability $p$ needs to scale to allow exact recovery in the presence of noise. Defining the degree (oversampling) rate of the graph by $\alpha = np/\log(n)$, it is shown that exact recovery is possible if and only if $\alpha > 2/(1-2\varepsilon)^2 + o(1/(1-2\varepsilon)^2)$. In other words, $2/(1-2\varepsilon)^2$ is the information-theoretic threshold for exact recovery at low SNR. In addition, an efficient recovery algorithm based on semidefinite programming is proposed and shown to succeed in the threshold regime up to twice the optimal rate. For a deterministic graph $G$, defining the degree rate as $\alpha = d/\log(n)$, where $d$ is the minimum degree of the graph, it is shown that the proposed method achieves the rate $\alpha > 4((1+\lambda)/(1-\lambda)^2)/(1-2\varepsilon)^2 + o(1/(1-2\varepsilon)^2)$, where $1-\lambda$ is the spectral gap of the graph $G$.
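A small sketch of a semidefinite relaxation of this type, in an assumed standard form (maximize agreement with the $\pm 1$-signed observations over the elliptope, then round by the top eigenvector); the sizes are illustrative and the program needs an SDP-capable solver such as SCS.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(4)
n, p, eps = 30, 0.5, 0.1
x = rng.integers(0, 2, n)                    # hidden Boolean vertex variables
W = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p:                 # edge (i, j) observed
            y = (x[i] ^ x[j]) ^ int(rng.random() < eps)   # Y = B_G x XOR Z on this edge
            W[i, j] = W[j, i] = 1 - 2 * y    # +1 for "same", -1 for "different"

# SDP relaxation: maximize agreement over the elliptope, round via top eigenvector
X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), [X >> 0, cp.diag(X) == 1])
prob.solve(solver=cp.SCS)
labels = (np.linalg.eigh(X.value)[1][:, -1] < 0).astype(int)
print(max(np.mean(labels == x), np.mean(labels != x)))  # accuracy up to global flip
```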