The realisation of sensing modalities based on the principles of compressed sensing is often hindered by discrepancies between the mathematical model of the sensing operator, which is required during signal recovery, and its actual physical implementation, which can differ substantially from the assumed model. In this paper we tackle the bilinear inverse problem of recovering a sparse input signal and some unknown, unstructured multiplicative factors affecting the sensors that capture each compressive measurement. Our methodology relies on collecting a few snapshots under new draws of the sensing operator, and on applying a greedy algorithm based on projected gradient descent and the principles of iterative hard thresholding. We explore empirically the sample complexity requirements of this algorithm by charting its phase transition, and show, in a practically relevant instance of this problem for compressive imaging, that the exact solution can be obtained with only a few snapshots.
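As a concrete illustration of the recovery step described above, the following sketch alternates a hard-thresholded gradient update on the sparse signal with a gradient update on the per-sensor gains, for snapshots modelled as y_p = diag(d) A_p x. It is a minimal sketch under assumed choices: the fixed step sizes mu and nu, the mean-one normalisation used to fix the scale ambiguity, and the function names are illustrative rather than the paper's exact algorithm.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v and zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def blind_iht(A_list, y_list, k, n_iter=300, mu=1e-3, nu=1e-3):
    """Alternating projected-gradient / IHT-style sketch for snapshots
    y_p = diag(d) @ A_p @ x with a k-sparse signal x and unknown gains d.
    Step sizes and the mean-one gain normalisation are illustrative choices."""
    m, n = A_list[0].shape
    x = np.zeros(n)
    d = np.ones(m)                          # start from the uncalibrated (unit) gains
    for _ in range(n_iter):
        # gradient of 0.5 * sum_p ||diag(d) A_p x - y_p||^2 in x, then hard-threshold
        g_x = sum(A.T @ (d * (d * (A @ x) - y)) for A, y in zip(A_list, y_list))
        x = hard_threshold(x - mu * g_x, k)
        # gradient in the gains, then fix the global scale ambiguity (mean gain = 1)
        g_d = sum((A @ x) * (d * (A @ x) - y) for A, y in zip(A_list, y_list))
        d_new = d - nu * g_d
        d = d_new / np.mean(d_new)
    return x, d
```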
Computational sensing strategies often suffer from calibration errors in the physical implementation of their ideal sensing models. Such uncertainties are typically addressed by using multiple, accurately chosen training signals to recover the missing information on the sensing model, an approach that can be resource-consuming and cumbersome. Conversely, blind calibration does not employ any training signal, but corresponds to a bilinear inverse problem whose algorithmic solution is an open issue. We here address blind calibration as a non-convex problem for linear random sensing models, in which we aim to recover an unknown signal from its projections on sub-Gaussian random vectors, each subject to an unknown positive multiplicative factor (or gain). To solve this optimisation problem we resort to projected gradient descent starting from a carefully chosen initialisation point. An analysis of this algorithm allows us to show that it converges to the exact solution provided that a sample complexity requirement is met, i.e., that convergence is guaranteed once enough information has been collected during the sensing process. Interestingly, we show that this requirement grows linearly (up to log factors) in the number of unknowns of the problem. This sample complexity is found both in the absence of prior information and when subspace priors are available for both the signal and the gains, the latter allowing a further reduction of the number of observations required for our recovery guarantees to hold. Moreover, in the presence of noise we show that our descent algorithm yields a solution whose accuracy degrades gracefully with the amount of noise affecting the measurements. Finally, we present some numerical experiments in an imaging context, where our algorithm allows for a simple solution to blind calibration of the gains in a sensor array.
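The abstract above refers to a carefully chosen initialisation and to a constraint keeping the gains positive; the snippet below shows one plausible realisation of both ingredients, which could replace the unit-gain start and the mean normalisation in the previous sketch. The averaged back-projection and the box radius rho are assumptions made for illustration, not necessarily the paper's exact choices.

```python
import numpy as np

def init_backprojection(A_list, y_list):
    """One natural starting point: the averaged back-projection of the snapshots
    for the signal, and unit (perfectly calibrated) gains. Illustrative choice;
    the paper's exact initialisation may differ."""
    m, n = A_list[0].shape
    x0 = sum(A.T @ y for A, y in zip(A_list, y_list)) / (m * len(A_list))
    d0 = np.ones(m)
    return x0, d0

def project_gains(d, rho=0.9):
    """Projection onto a box of positive gains around 1, i.e. enforcing
    1 - rho <= d_i <= 1 + rho with rho < 1 so every gain stays strictly positive."""
    return np.clip(d, 1.0 - rho, 1.0 + rho)
```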
This paper analyzes the impact of non-Gaussian multipath component (MPC) amplitude distributions on the performance of Compressed Sensing (CS) channel estimators for OFDM systems. The number of dominant MPCs that any CS algorithm needs to estimate in order to accurately represent the channel is characterized. This number relates to a Compressibility Index (CI) of the channel that depends on the fourth moment of the MPC amplitude distribution. A connection is revealed between the Mean Squared Error (MSE) of any CS estimation algorithm and the fourth moment of the MPC amplitude distribution, showing that fewer MPCs are needed to estimate the channel accurately when these components have amplitude gains with a large fourth moment. The analytical results are validated via simulations for channels with lognormal MPCs, such as the NYU mmWave channel model. These simulations show that when the MPC amplitude distribution has a high fourth moment, the well-known Orthogonal Matching Pursuit (OMP) algorithm performs almost identically to the Basis Pursuit De-Noising (BPDN) algorithm at a much lower computational cost.
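To make the role of the fourth moment tangible, the snippet below computes a participation-ratio-style proxy for a compressibility index and evaluates it on lognormal MPC amplitudes. It is only an illustrative stand-in for the paper's CI, whose exact definition may differ, but it exhibits the qualitative trend that a heavier (higher fourth moment) amplitude law concentrates the channel energy in fewer dominant MPCs.

```python
import numpy as np

def compressibility_index(amplitudes):
    """Participation-ratio-style proxy in (0, 1]: (sum |a|^2)^2 / (L * sum |a|^4).
    Small values mean a few large-amplitude MPCs dominate the channel."""
    a2 = np.abs(amplitudes) ** 2
    return (a2.sum() ** 2) / (a2.size * (a2 ** 2).sum())

# Lognormal MPC amplitudes: a larger sigma gives a larger fourth moment and a
# smaller index, i.e. fewer dominant MPCs are needed to represent the channel.
rng = np.random.default_rng(0)
for sigma in (0.5, 1.0, 2.0):
    amps = rng.lognormal(mean=0.0, sigma=sigma, size=1000)
    print(f"sigma = {sigma}: CI proxy = {compressibility_index(amps):.3f}")
```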
Compressed sensing (CS), or sparse signal reconstruction (SSR), is a signal processing technique that exploits the fact that acquired data can have a sparse representation in some basis. One popular technique to reconstruct or approximate the unknown sparse signal is iterative hard thresholding (IHT), which, however, performs very poorly under non-Gaussian noise conditions or in the presence of outliers (gross errors). In this paper, we propose a robust IHT method based on ideas from $M$-estimation that estimates the sparse signal and the scale of the error distribution simultaneously. The method incurs a negligible performance loss compared to IHT under Gaussian noise, but offers superior performance under heavy-tailed non-Gaussian noise conditions.
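A minimal sketch of such a robust IHT variant is given below: the least-squares residual is replaced by a Huber pseudo-residual, and the error scale is re-estimated at every iteration with the normalised MAD. The Huber score, the MAD-based scale update, and the step size are assumptions chosen for illustration; the paper's joint M-estimation of signal and scale may differ.

```python
import numpy as np

def hard_threshold(v, k):
    """Keep the k largest-magnitude entries of v (same helper as in the first sketch)."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]
    out[keep] = v[keep]
    return out

def robust_iht(A, y, k, n_iter=200, c=1.345):
    """Huber-type robust IHT sketch: residuals are scaled by a MAD estimate of the
    noise level and clipped before the gradient step, so outliers have bounded
    influence on the update."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # standard IHT step size
    for _ in range(n_iter):
        r = y - A @ x
        sigma = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-12  # robust scale
        psi = np.clip(r / sigma, -c, c)           # Huber score of the scaled residuals
        x = hard_threshold(x + mu * sigma * (A.T @ psi), k)
    return x
```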
In this paper, based on successively more accurate approximations of the $\ell_0$ norm, we propose a new algorithm for the recovery of sparse vectors from underdetermined measurements. The approximations are realized with a certain class of concave functions that aggressively induce sparsity and whose closeness to the $\ell_0$ norm can be controlled. We prove that this sequence of approximations coincides asymptotically with the $\ell_1$ and $\ell_0$ norms as the approximation accuracy ranges from the coarsest to the finest fit. When measurements are noise-free, we propose an optimization scheme that leads to a number of weighted $\ell_1$ minimization programs, whereas, in the presence of noise, we propose two iterative thresholding methods that are computationally appealing. A convergence guarantee for the iterative thresholding method is provided and, for a particular function in the class of approximating functions, we derive the closed-form thresholding operator. We further present some theoretical analyses via the restricted isometry, null space, and spherical section properties. Our extensive numerical simulations indicate that the proposed algorithm closely follows the performance of the oracle estimator for a range of sparsity levels wider than those of the state-of-the-art algorithms.
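The following sketch illustrates the noisy-case strategy with one assumed member of such a family of concave penalties, $p_\sigma(u) = \sigma (1 - e^{-|u|/\sigma})$, which behaves like the $\ell_1$ norm for large $\sigma$ and approaches the $\ell_0$ norm as $\sigma$ shrinks. The reweighted soft-threshold used here is a surrogate, not the closed-form operator derived in the paper, and the continuation schedule is an illustrative choice.

```python
import numpy as np

def concave_threshold(v, t, sigma):
    """One-step reweighted soft-threshold for the penalty sigma * (1 - exp(-|u|/sigma)):
    large entries receive a small weight and are barely shrunk (an illustrative
    surrogate, not the paper's closed-form operator)."""
    w = np.exp(-np.abs(v) / sigma)
    return np.sign(v) * np.maximum(np.abs(v) - t * w, 0.0)

def successive_thresholding(A, y, lam=0.05, n_outer=8, n_inner=50, decay=0.5):
    """Iterative thresholding with a successively tightened approximation of the
    l0 norm: sigma starts large (near-l1 behaviour) and is shrunk towards zero
    (near-l0 behaviour)."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    sigma = np.abs(A.T @ y).max() + 1e-12         # coarse initial approximation
    for _ in range(n_outer):
        for _ in range(n_inner):
            x = concave_threshold(x + mu * (A.T @ (y - A @ x)), mu * lam, sigma)
        sigma *= decay                            # tighten the l0 approximation
    return x
```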
Compressed sensing (CS) with prior information concerns the problem of reconstructing a sparse signal with the aid of a similar signal that is known beforehand. We consider a new approach that integrates the prior information into CS by maximizing the correlation between the prior knowledge and the desired signal. We then present a geometric analysis of the proposed method under sub-Gaussian measurements. Our results reveal that if the prior information is good enough, the proposed approach can improve on the performance of standard CS. Simulations are provided to verify our results.
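As a sketch of how such a correlation term can be used in practice, the snippet below solves, by proximal gradient descent, an unconstrained surrogate in which the usual $\ell_1$ penalty is augmented with a term rewarding correlation with the prior signal phi. The objective, the parameters lam and rho, and the solver are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def soft_threshold(v, t):
    """Element-wise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cs_with_prior(A, y, phi, lam=0.05, rho=1.0, n_iter=500):
    """Proximal-gradient sketch for the surrogate programme
        min_x 0.5 * ||A x - y||^2 + lam * (||x||_1 - rho * <x, phi>),
    where the inner-product term rewards correlation with the prior signal phi."""
    x = np.zeros(A.shape[1])
    mu = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iter):
        v = x - mu * (A.T @ (A @ x - y))                        # gradient step on the data fit
        x = soft_threshold(v + mu * lam * rho * phi, mu * lam)  # prox of the l1 - correlation term
    return x
```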