
Through the Haze: a Non-Convex Approach to Blind Gain Calibration for Linear Random Sensing Models

Submitted by: Laurent Jacques
Publication date: 2016
Research field: Informatics engineering
Paper language: English





Computational sensing strategies often suffer from calibration errors in the physical implementation of their ideal sensing models. Such uncertainties are typically addressed by using multiple, accurately chosen training signals to recover the missing information on the sensing model, an approach that can be resource-consuming and cumbersome. Conversely, blind calibration does not employ any training signal, but corresponds to a bilinear inverse problem whose algorithmic solution is an open issue. We here address blind calibration as a non-convex problem for linear random sensing models, in which we aim to recover an unknown signal from its projections on sub-Gaussian random vectors, each subject to an unknown positive multiplicative factor (or gain). To solve this optimisation problem we resort to projected gradient descent starting from a suitable, carefully chosen initialisation point. An analysis of this algorithm shows that it converges to the exact solution provided a sample complexity requirement is met, i.e., one relating convergence to the amount of information collected during the sensing process. Interestingly, this requirement grows linearly (up to log factors) in the number of unknowns of the problem. This sample complexity holds both in the absence of prior information and when subspace priors are available for both the signal and gains, the latter allowing a further reduction of the number of observations required for our recovery guarantees to hold. Moreover, in the presence of noise we show that our descent algorithm yields a solution whose accuracy degrades gracefully with the amount of noise affecting the measurements. Finally, we present some numerical experiments in an imaging context, where our algorithm provides a simple solution to blind calibration of the gains in a sensor array.
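As a rough illustration of the approach described in the abstract, the sketch below runs projected gradient descent on a least-squares objective for a multi-snapshot bilinear model y[l] = diag(d) A[l] x. The multi-snapshot layout, step size, initialisation, and the positivity/scale projection on the gains are plausible choices consistent with the abstract, not the paper's exact construction.

```python
import numpy as np

def blind_calibration_pgd(y, A, n_iter=2000, step=0.5):
    """Projected gradient descent for blind gain calibration (sketch).

    Assumed model: y[l] = d * (A[l] @ x) for snapshots l = 0..p-1, with
    an unknown signal x in R^n and unknown positive gains d in R^m.
    y has shape (p, m); A has shape (p, m, n) with i.i.d. sub-Gaussian
    entries. Step size and iteration count are illustrative.
    """
    p, m, n = A.shape
    # Initialisation: back-projection of the measurements for the signal,
    # all-ones gains (one plausible reading of the abstract's
    # "carefully chosen initialisation point").
    x = sum(A[l].T @ y[l] for l in range(p)) / (m * p)
    g = np.ones(m)
    for _ in range(n_iter):
        # Residuals of the bilinear model at the current estimates.
        r = np.stack([g * (A[l] @ x) - y[l] for l in range(p)])
        # Gradients of the scaled least-squares objective
        # f(x, g) = sum_l ||diag(g) A[l] x - y[l]||^2 / (2 m p).
        grad_x = sum(A[l].T @ (g * r[l]) for l in range(p)) / (m * p)
        grad_g = sum((A[l] @ x) * r[l] for l in range(p)) / (m * p)
        x = x - step * grad_x
        g = g - step * grad_g
        # Projection: enforce positive gains with a fixed mean, removing
        # the scale ambiguity inherent to the bilinear model.
        g = np.maximum(g, 1e-8)
        g *= m / g.sum()
    return x, g
```

On synthetic Gaussian data this kind of iteration should recover (x, d) up to the fixed gain scale once enough measurements are collected, mirroring the sample-complexity behaviour the abstract describes.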




Read also

The realisation of sensing modalities based on the principles of compressed sensing is often hindered by discrepancies between the mathematical model of its sensing operator, which is necessary during signal recovery, and its actual physical implementation, which can differ substantially from the assumed model. In this paper we tackle the bilinear inverse problem of recovering a sparse input signal and some unknown, unstructured multiplicative factors affecting the sensors that capture each compressive measurement. Our methodology relies on collecting a few snapshots under new draws of the sensing operator, and applying a greedy algorithm based on projected gradient descent and the principles of iterative hard thresholding. We explore empirically the sample complexity requirements of this algorithm by testing its phase transition, and show in a practically relevant instance of this problem for compressive imaging that the exact solution can be obtained with only a few snapshots.
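The thresholding step at the heart of the greedy algorithm mentioned above can be made concrete as follows; this is the generic projection used by iterative hard thresholding, not code from the paper.

```python
import numpy as np

def hard_threshold(x, k):
    """Project x onto the set of k-sparse vectors by keeping its k
    largest-magnitude entries and zeroing the rest (the projection
    step of iterative hard thresholding)."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out
```

Interleaving this projector with gradient steps on the bilinear objective (applied to the signal iterate after each update) yields the kind of projected-gradient/hard-thresholding hybrid the abstract describes.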
This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$-dimensional parameter vector. An exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated, but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one consists in applying convex optimization to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to find a sufficiently good solution to the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and confirm the efficiency of the proposed blind methods in approaching the performance of channel-aware algorithms.
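As one concrete instance of the greedy strategy described above, the sketch below adds, at each step, the row of the measurement matrix that most increases the log-determinant of the accumulated information matrix. The log-det criterion is a common proxy error measure chosen here for illustration; the paper's actual (asymptotic) error measures are not reproduced.

```python
import numpy as np

def greedy_sensor_selection(H, k, reg=1e-6):
    """Greedily pick k of the n rows of the (n, m) measurement matrix H,
    maximising log det of the regularised information matrix (sketch)."""
    n, m = H.shape
    selected = []
    M = reg * np.eye(m)  # regularised information matrix
    for _ in range(k):
        Minv = np.linalg.inv(M)
        best, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            h = H[i]
            # Rank-one identity: log det(M + h h^T) - log det(M)
            #                  = log(1 + h^T M^{-1} h).
            gain = np.log1p(h @ Minv @ h)
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
        M += np.outer(H[best], H[best])
    return selected
```

Because the gain of each candidate row is evaluated through a rank-one update, each greedy step costs far less than re-solving the full selection problem, which is the source of the low complexity claimed above.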
We consider the problem of resolving $r$ point sources from $n$ samples at the low end of the spectrum when point spread functions (PSFs) are not known. Assuming that the spectrum samples of the PSFs lie in a low-dimensional subspace (let $s$ denote the dimension), this problem can be reformulated as a matrix recovery problem, followed by location estimation. By exploiting the low-rank structure of the vectorized Hankel matrix associated with the target matrix, a convex approach called Vectorized Hankel Lift is proposed for the matrix recovery. It is shown that $n \gtrsim rs\log^4 n$ samples are sufficient for Vectorized Hankel Lift to achieve exact recovery. For location retrieval from the recovered matrix, applying the single-snapshot MUSIC method within the vectorized Hankel lift framework corresponds to the spatial smoothing technique proposed to improve the performance of MMV MUSIC for direction-of-arrival (DOA) estimation.
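The key object in the abstract above is the vectorized Hankel matrix. A sketch of its construction, following the standard definition used in this line of work (with $k$ an assumed pencil parameter), is:

```python
import numpy as np

def vectorized_hankel(X, k):
    """Build the (s*k, n-k+1) vectorized (block-)Hankel matrix from an
    (s, n) data matrix X whose columns are the spectrum samples: column
    j of the output vertically stacks X[:, j], X[:, j+1], ..., X[:, j+k-1].
    When the samples come from r point sources, this matrix is low-rank,
    which is the structure exploited by the convex lift (sketch)."""
    s, n = X.shape
    H = np.empty((s * k, n - k + 1), dtype=X.dtype)
    for j in range(n - k + 1):
        # Stack the k consecutive sample columns starting at index j.
        H[:, j] = X[:, j:j + k].T.reshape(-1)
    return H
```

Recovery then amounts to finding the data matrix whose vectorized Hankel matrix has minimal nuclear norm subject to the observed samples, after which the source locations are read off with a MUSIC-type method.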
Sparse blind deconvolution is the problem of estimating the blur kernel and sparse excitation, both of which are unknown. Considering a linear convolution model, as opposed to the standard circular convolution model, we derive a sufficient condition for stable deconvolution. The columns of the linear convolution matrix form a Riesz basis, with the tightness of the Riesz bounds determined by the autocorrelation of the blur kernel. Employing a Bayesian framework results in a non-convex, non-smooth cost function consisting of an $\ell_2$ data-fidelity term and a sparsity-promoting $\ell_p$-norm ($0 \le p \le 1$) regularizer. Since the $\ell_p$-norm is not differentiable at the origin, we employ an $\epsilon$-regularized $\ell_p$-norm as a surrogate. The data term is also non-convex in both the blur kernel and excitation. An iterative scheme termed the alternating minimization (Alt. Min.) $\ell_p$-$\ell_2$ projections algorithm (ALPA) is developed for optimization of the $\epsilon$-regularized cost function. Further, we demonstrate that, in every iteration, the $\epsilon$-regularized cost function is non-increasing and, more importantly, bounds the original $\ell_p$-norm-based cost. Due to the non-convexity of the cost, the accuracy of estimation is largely influenced by the initialization. Considering the regularized least-squares estimate as the initialization, we analyze how the initialization errors are concentrated, first in Gaussian noise, and then in bounded noise, the latter case resulting in tighter bounds. Comparisons with state-of-the-art blind deconvolution algorithms show that the deconvolution accuracy is higher in the case of ALPA. In the context of natural speech signals, ALPA results in accurate deconvolution of a voiced speech segment into a sparse excitation and smooth vocal tract response.
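Since the $\ell_p$-norm is not differentiable at the origin, the abstract replaces it with an $\epsilon$-regularized surrogate. One standard smoothing of this kind is sketched below; the exact surrogate used by ALPA may differ, and the parameter values are illustrative.

```python
import numpy as np

def lp_eps_penalty(x, p=0.7, eps=1e-3):
    """Smoothed sparsity penalty sum_i (x_i^2 + eps^2)^(p/2), an
    epsilon-regularised l_p 'norm' (0 < p <= 1) that, unlike |x|^p,
    is differentiable at the origin (sketch)."""
    return np.sum((x**2 + eps**2) ** (p / 2))

def lp_eps_grad(x, p=0.7, eps=1e-3):
    """Gradient of the smoothed penalty, usable inside an alternating
    minimisation loop over the blur kernel and sparse excitation:
    d/dx (x^2 + eps^2)^(p/2) = p * x * (x^2 + eps^2)^(p/2 - 1)."""
    return p * x * (x**2 + eps**2) ** (p / 2 - 1)
```

As eps shrinks, the surrogate approaches the non-smooth $\ell_p$ penalty while remaining well behaved near zero, which is what makes gradient-based alternating updates tractable.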
This work considers two popular minimization problems: (i) the minimization of a general convex function $f(\mathbf{X})$ with the domain being positive semi-definite matrices; (ii) the minimization of a general convex function $f(\mathbf{X})$ regularized by the matrix nuclear norm $\|\mathbf{X}\|_*$ with the domain being general matrices. Despite their optimal statistical performance in the literature, these two optimization problems have a high computational complexity even when solved using tailored fast convex solvers. To develop faster and more scalable algorithms, we follow the proposal of Burer and Monteiro to factor the low-rank variable $\mathbf{X} = \mathbf{U}\mathbf{U}^\top$ (for semi-definite matrices) or $\mathbf{X} = \mathbf{U}\mathbf{V}^\top$ (for general matrices) and also replace the nuclear norm $\|\mathbf{X}\|_*$ with $(\|\mathbf{U}\|_F^2 + \|\mathbf{V}\|_F^2)/2$. In spite of the non-convexity of the resulting factored formulations, we prove that each critical point either corresponds to the global optimum of the original convex problems or is a strict saddle where the Hessian matrix has a strictly negative eigenvalue. Such a nice geometric structure of the factored formulations allows many local search algorithms to find a global optimizer even with random initializations.
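A minimal sketch of the factored formulation for the nuclear-norm-regularised case: plain gradient descent on the factors $\mathbf{U}, \mathbf{V}$ after the substitution $\|\mathbf{X}\|_* \to (\|\mathbf{U}\|_F^2 + \|\mathbf{V}\|_F^2)/2$ stated in the abstract. The caller supplies the gradient of $f$; the function name, initialisation scale, and step size are illustrative assumptions.

```python
import numpy as np

def factored_descent(grad_f, shape, rank, lam, n_iter=1000, step=1e-2, seed=0):
    """Gradient descent on g(U, V) = f(U V^T) + (lam/2)(||U||_F^2 + ||V||_F^2),
    the factored surrogate of min_X f(X) + lam ||X||_* (sketch).
    grad_f(X) must return the gradient of f at the (m, n) matrix X."""
    m, n = shape
    rng = np.random.default_rng(seed)
    # Random initialisation: by the strict-saddle property cited above,
    # local search is not expected to stall at spurious critical points.
    U = rng.standard_normal((m, rank)) / np.sqrt(rank)
    V = rng.standard_normal((n, rank)) / np.sqrt(rank)
    for _ in range(n_iter):
        G = grad_f(U @ V.T)       # gradient of f at X = U V^T
        dU = G @ V + lam * U      # d/dU of the factored objective
        dV = G.T @ U + lam * V    # d/dV of the factored objective
        U -= step * dU
        V -= step * dV
    return U, V
```

For example, for the matrix-denoising objective $f(\mathbf{X}) = \|\mathbf{X} - \mathbf{Y}\|_F^2 / 2$ one would pass `grad_f = lambda X: X - Y`.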