
Vectorized Hankel Lift: A Convex Approach for Blind Super-Resolution of Point Sources

Submitted by Ke Wei
Publication date: 2020
Research field: Information Engineering
Language: English





We consider the problem of resolving $r$ point sources from $n$ samples at the low end of the spectrum when the point spread functions (PSFs) are not known. Assuming that the spectrum samples of the PSFs lie in a low-dimensional subspace (let $s$ denote its dimension), this problem can be reformulated as a matrix recovery problem, followed by location estimation. By exploiting the low-rank structure of the vectorized Hankel matrix associated with the target matrix, a convex approach called Vectorized Hankel Lift is proposed for the matrix recovery. It is shown that $n \gtrsim rs \log^4 n$ samples are sufficient for Vectorized Hankel Lift to achieve exact recovery. For the location retrieval from the recovered matrix, applying the single-snapshot MUSIC method within the Vectorized Hankel Lift framework corresponds to the spatial smoothing technique proposed to improve the performance of MMV MUSIC for direction-of-arrival (DOA) estimation.
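To make the convex program concrete, below is a minimal Python/CVXPY sketch of the vectorized Hankel lifting and nuclear-norm minimization described above, assuming the usual blind super-resolution measurement model $y_j = b_j^* x_j$, where $x_j$ is the $j$-th column of the unknown $s \times n$ signal matrix and $b_j$ is a known basis vector for the PSF subspace. The function names, the choice of pencil parameter $n_1$, and the generic SDP solver are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
import cvxpy as cp

def vhankel(Z, n1):
    """Vectorized (block) Hankel lift: an (s*n1) x n2 matrix whose
    (i, j)-th s-dimensional block is the column Z[:, i + j]."""
    s, n = Z.shape
    n2 = n - n1 + 1
    return cp.vstack([cp.hstack([Z[:, i + j:i + j + 1] for j in range(n2)])
                      for i in range(n1)])

def vectorized_hankel_lift(y, B, n1):
    """Recover the s x n matrix X from samples y_j = b_j^H x_j by
    minimizing the nuclear norm of its vectorized Hankel lift."""
    s, n = B.shape
    Z = cp.Variable((s, n), complex=True)
    data_fit = cp.sum(cp.multiply(np.conj(B), Z), axis=0) == y
    prob = cp.Problem(cp.Minimize(cp.normNuc(vhankel(Z, n1))), [data_fit])
    prob.solve()  # generic SDP-capable solver; practical only for small n and s
    return Z.value
```

The source locations can then be estimated from the recovered matrix with a single-snapshot MUSIC step, as noted in the abstract.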


Read also

Computational sensing strategies often suffer from calibration errors in the physical implementation of their ideal sensing models. Such uncertainties are typically addressed by using multiple, accurately chosen training signals to recover the missing information on the sensing model, an approach that can be resource-consuming and cumbersome. Conversely, blind calibration does not employ any training signal, but corresponds to a bilinear inverse problem whose algorithmic solution is an open issue. We here address blind calibration as a non-convex problem for linear random sensing models, in which we aim to recover an unknown signal from its projections on sub-Gaussian random vectors, each subject to an unknown positive multiplicative factor (or gain). To solve this optimisation problem we resort to projected gradient descent starting from a suitable, carefully chosen initialisation point. An analysis of this algorithm allows us to show that it converges to the exact solution provided a sample complexity requirement is met, i.e., relating convergence to the amount of information collected during the sensing process. Interestingly, we show that this requirement grows linearly (up to log factors) in the number of unknowns of the problem. This sample complexity is found both in the absence of prior information, as well as when subspace priors are available for both the signal and gains, allowing a further reduction of the number of observations required for our recovery guarantees to hold. Moreover, in the presence of noise we show how our descent algorithm yields a solution whose accuracy degrades gracefully with the amount of noise affecting the measurements. Finally, we present some numerical experiments in an imaging context, where our algorithm allows for a simple solution to blind calibration of the gains in a sensor array.
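As a rough illustration of the descent scheme described above, the following sketch runs plain projected gradient descent on the model $y_i = g_i \langle a_i, x \rangle$ with unknown positive gains $g_i$. The back-projection initialisation, the fixed step size, and the positivity/scale projection are simple stand-ins for the carefully chosen initialisation and step rules analysed in the paper.

```python
import numpy as np

def blind_calibration_pgd(A, y, n_iters=500, step=None):
    """A: m x n sub-Gaussian sensing matrix; y: m observations y_i = g_i * <a_i, x>."""
    m, n = A.shape
    x = A.T @ y / m                  # crude back-projection initialisation (assumption)
    g = np.ones(m)                   # start from unit gains
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    for _ in range(n_iters):
        Ax = A @ x
        r = g * Ax - y               # residuals of the bilinear model
        x = x - step * (A.T @ (g * r)) / m   # gradient step in the signal
        g = g - step * (Ax * r) / m          # gradient step in the gains
        g = np.maximum(g, 1e-8)              # project onto positive gains
        g *= m / g.sum()                     # fix the scale ambiguity (mean gain = 1)
    return x, g
```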
This work considers a super-resolution framework for overcomplete tensor decomposition. Specifically, we view tensor decomposition as a super-resolution problem of recovering a sum of Dirac measures on the sphere and solve it by minimizing a continuous analog of the $\ell_1$ norm on the space of measures. The optimal value of this optimization defines the tensor nuclear norm. Similar to the separation condition in the super-resolution problem, by explicitly constructing a dual certificate, we develop incoherence conditions of the tensor factors so that they form the unique optimal solution of the continuous analog of $\ell_1$ norm minimization. Remarkably, the derived incoherence conditions are satisfied with high probability by random tensor factors uniformly distributed on the sphere, implying global identifiability of random tensor factors.
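For concreteness, one common way to write the continuous analog of $\ell_1$ minimization referred to above, in the symmetric third-order case, is as total-variation minimization over measures on the sphere; the rendering below is a hedged sketch, not necessarily the paper's exact formulation:

$$ \|\mathcal{T}\|_{*} \;=\; \min_{\mu} \ \|\mu\|_{\mathrm{TV}} \quad \text{s.t.} \quad \mathcal{T} \;=\; \int_{\mathbb{S}^{d-1}} a \otimes a \otimes a \,\mathrm{d}\mu(a), $$

so that a decomposition $\mathcal{T} = \sum_i \lambda_i\, a_i^{\otimes 3}$ corresponds to the discrete measure $\mu = \sum_i \lambda_i \delta_{a_i}$ with $\|\mu\|_{\mathrm{TV}} = \sum_i |\lambda_i|$, and the dual certificate mentioned above certifies that this measure is the unique minimizer.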
This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$-dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated, but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution to the original minimization problem, as sketched below. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and sustain the efficiency of the proposed blind methods in reaching the performance of channel-aware algorithms.
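The greedy heuristic mentioned above can be sketched as follows for a real measurement matrix: rows are added one at a time, each time keeping the row that most reduces a simple MSE-style criterion $\mathrm{tr}\big((H_S^T H_S + \lambda I)^{-1}\big)$. The criterion, the regularizer and the function names are illustrative choices, not the paper's exact asymptotic error measures; swapping in a closed-form approximation of the criterion would give a blind variant in the spirit of the abstract.

```python
import numpy as np

def greedy_sensor_selection(H, k, reg=1e-3):
    """Select k of the n rows of H (n x m, real) greedily so that the
    error proxy trace((H_S^T H_S + reg*I)^-1) stays small."""
    n, m = H.shape
    selected, remaining = [], list(range(n))
    for _ in range(k):
        best_err, best_i = np.inf, None
        for i in remaining:
            rows = H[selected + [i], :]
            G = rows.T @ rows + reg * np.eye(m)
            err = np.trace(np.linalg.inv(G))
            if err < best_err:
                best_err, best_i = err, i
        selected.append(best_i)
        remaining.remove(best_i)
    return selected
```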
Sparse blind deconvolution is the problem of estimating the blur kernel and sparse excitation, both of which are unknown. Considering a linear convolution model, as opposed to the standard circular convolution model, we derive a sufficient condition for stable deconvolution. The columns of the linear convolution matrix form a Riesz basis with the tightness of the Riesz bounds determined by the autocorrelation of the blur kernel. Employing a Bayesian framework results in a non-convex, non-smooth cost function consisting of an $\ell_2$ data-fidelity term and a sparsity-promoting $\ell_p$-norm ($0 \le p \le 1$) regularizer. Since the $\ell_p$-norm is not differentiable at the origin, we employ an $\epsilon$-regularized $\ell_p$-norm as a surrogate. The data term is also non-convex in both the blur kernel and excitation. An iterative scheme termed the alternating minimization (Alt. Min.) $\ell_p$-$\ell_2$ projections algorithm (ALPA) is developed for optimization of the $\epsilon$-regularized cost function. Further, we demonstrate that, in every iteration, the $\epsilon$-regularized cost function is non-increasing and, more importantly, bounds the original $\ell_p$-norm-based cost. Due to the non-convexity of the cost, the accuracy of estimation is largely influenced by the initialization. Considering the regularized least-squares estimate as the initialization, we analyze how the initialization errors are concentrated, first in Gaussian noise, and then in bounded noise, the latter case resulting in tighter bounds. Comparisons with state-of-the-art blind deconvolution algorithms show that the deconvolution accuracy is higher in the case of ALPA. In the context of natural speech signals, ALPA results in accurate deconvolution of a voiced speech segment into a sparse excitation and smooth vocal tract response.
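A simple illustrative loop in the spirit of the alternating scheme above: the excitation is updated with an IRLS-style step derived from the $\epsilon$-regularized $\ell_p$ surrogate, and the kernel is then refit by least squares under a linear-convolution model. The update rules, initialisation, and parameters below are stand-ins, not the exact ALPA projections.

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(h, n):
    """Linear-convolution matrix C(h) such that C(h) @ x == np.convolve(h, x)."""
    m = len(h)
    col = np.r_[h, np.zeros(n - 1)]
    row = np.r_[h[0], np.zeros(n - 1)]
    return toeplitz(col, row)

def alt_min_lp_l2(y, m, n, p=0.7, lam=1e-2, eps=1e-3, n_iters=50):
    """y: observed signal of length m + n - 1; m: kernel length; n: excitation length."""
    h = np.ones(m) / m                       # flat initial kernel (assumption)
    x = np.zeros(n)
    for _ in range(n_iters):
        # excitation update: IRLS weights for the eps-regularized ell_p penalty
        C = conv_matrix(h, n)
        w = p * (x ** 2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(C.T @ C + lam * np.diag(w), C.T @ y)
        # kernel update: plain least squares with the excitation held fixed
        X = conv_matrix(x, m)
        h = np.linalg.lstsq(X, y, rcond=None)[0]
    return h, x
```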
In a variety of fields, in particular those involving imaging and optics, we often measure signals whose phase is missing or has been irremediably distorted. Phase retrieval attempts to recover the phase information of a signal from the magnitude of its Fourier transform to enable the reconstruction of the original signal. Solving the phase retrieval problem is equivalent to recovering a signal from its auto-correlation function. In this paper, we assume the original signal to be sparse; this is a natural assumption in many applications, such as X-ray crystallography, speckle imaging and blind channel estimation. We propose an algorithm that resolves the phase retrieval problem in three stages: i) we leverage the finite rate of innovation sampling theory to super-resolve the auto-correlation function from a limited number of samples, ii) we design a greedy algorithm that identifies the locations of a sparse solution given the super-resolved auto-correlation function, iii) we recover the amplitudes of the atoms given their locations and the measured auto-correlation function. Unlike traditional approaches that recover a discrete approximation of the underlying signal, our algorithm estimates the signal on a continuous domain, which makes it the first of its kind. Along with the algorithm, we derive its performance bound with a theoretical analysis and propose a set of enhancements to improve its computational complexity and noise resilience. Finally, we demonstrate the benefits of the proposed method via a comparison against Charge Flipping, a notable algorithm in crystallography.
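As a small, assumption-laden illustration of stage (iii) above: in a simplified discrete setting with at least three atoms, positive amplitudes, and collision-free (all distinct) pairwise location differences, the autocorrelation value at lag $t_k - t_l$ equals the product $c_k c_l$, so each amplitude follows from three such products. The continuous-domain method in the paper is more general; this sketch only conveys the idea.

```python
import numpy as np

def amplitudes_from_autocorrelation(locations, autocorr):
    """locations: integer atom positions (pairwise differences assumed distinct);
    autocorr: dict mapping integer lag -> autocorrelation value."""
    K = len(locations)
    def prod(k, l):                  # c_k * c_l, read off at lag t_k - t_l
        return autocorr[locations[k] - locations[l]]
    c = np.empty(K)
    for k in range(K):
        i, j = [idx for idx in range(K) if idx != k][:2]
        c[k] = np.sqrt(prod(k, i) * prod(k, j) / prod(i, j))  # c_k^2 = (c_k c_i)(c_k c_j)/(c_i c_j)
    return c
```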