Pulse-Doppler radars suffer from range-Doppler ambiguity, which translates into a trade-off between the maximal unambiguous range and velocity. Several techniques, such as the multiple PRF (MPRF) method, have been proposed to mitigate this problem. The drawback of the MPRF method is that the received samples are not processed jointly, which decreases the signal-to-noise ratio (SNR). To overcome this drawback, we employ a random pulse phase coding approach that increases the unambiguous range region while preserving the unambiguous Doppler region. Our method encodes each pulse with a random phase, varying from pulse to pulse, and then processes the received samples jointly to resolve the range ambiguity. This technique increases the SNR through joint processing, without the parameter-matching procedures required by the MPRF method. The recovery algorithm is based on orthogonal matching pursuit (OMP), so it can be applied directly to either Nyquist or sub-Nyquist samples. The unambiguous delay-Doppler recovery condition is derived using compressed sensing theory in the noiseless setting. In particular, an upper bound on the number of recoverable targets is given in terms of the number of samples per pulse repetition interval and the number of transmitted pulses. Simulations show that, in both the Nyquist and sub-Nyquist sampling regimes, our method outperforms the popular MPRF approach in terms of hit rate.
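As a toy illustration of the idea (not the paper's exact signal model), the sketch below encodes each transmitted pulse with a random phase, builds a delay-Doppler dictionary in which delays may exceed one pulse repetition interval (PRI), and resolves the range fold with a single OMP correlation step. All sizes and the simplified point-target model are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
P, N = 32, 16                                    # pulses per CPI, fast-time samples per PRI
phases = np.exp(2j * np.pi * rng.random(P))      # random per-pulse phase code

def atom(d, nu):
    """Noiseless response of a unit target at total delay d (in samples,
    possibly exceeding one PRI) and normalized Doppler nu (cycles/pulse).
    The echo of pulse p lands in PRI p + d // N at fast-time bin d % N,
    carrying the phase of the *transmitted* pulse p -- this is what
    disambiguates the range fold."""
    y = np.zeros((P, N), dtype=complex)
    fold, bin_ = divmod(d, N)
    for p in range(P - fold):
        y[p + fold, bin_] += phases[p] * np.exp(2j * np.pi * nu * p)
    return y.ravel()

# dictionary over a small delay-Doppler grid (delays beyond one PRI included)
delays = range(2 * N)                            # unambiguous range doubled
dopplers = np.arange(P) / P
A = np.array([atom(d, nu) for d in delays for nu in dopplers]).T
A /= np.linalg.norm(A, axis=0)

# one target folded past the first PRI
true_d, true_nu = N + 3, 5 / P
y = 0.7 * atom(true_d, true_nu)

# one OMP iteration suffices for a single target
k = np.argmax(np.abs(A.conj().T @ y))
est_d, est_nu = delays[k // P], dopplers[k % P]
print(est_d, est_nu)                             # → 19 0.15625
```

Without the random phase code, the two delay hypotheses d = 3 and d = N + 3 would be nearly indistinguishable; the pulse-to-pulse phase variation makes their dictionary atoms incoherent.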
In this paper, we present an algorithm for determining a curve on the earth's terrain on which a stationary emitter must lie, given a single Doppler shift measured on an unmanned aerial vehicle (UAV) or a low earth orbit satellite (LEOS). The vehicle measures the Doppler shift and uses it, together with its own velocity, to construct the equation of a right circular cone; it then determines the curve of points where this cone intersects an ellipsoid that approximates the earth's surface. The intersection points of the cone with the ellipsoid are mapped onto a digital terrain data set, namely Digital Terrain Elevation Data (DTED), to generate the intersection points on the earth's terrain. The work also considers the effect of the earth's rotation on the Doppler shift, as well as the errors resulting from the non-constant refractive index of the atmosphere and from imprecise knowledge of the transmitter frequency.
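The cone-ellipsoid intersection step can be sketched as follows. This is a geometric toy (the function name `doppler_curve` and all numbers are illustrative); it neglects earth rotation, atmospheric refraction, and the final DTED terrain-mapping stage described above.

```python
import numpy as np

C = 299_792_458.0                      # speed of light, m/s
A_E, B_E = 6378137.0, 6356752.3142     # WGS-84 semi-axes, m (equatorial, polar)

def doppler_curve(pos, vel, f_dopp, f_c, n_pts=360):
    """Points on the (approximate) ellipsoidal earth consistent with one
    Doppler measurement.  The Doppler shift fixes the angle between the
    vehicle velocity and the line of sight, cos(theta) = f_dopp*c/(f_c*|v|),
    i.e. a right circular cone with apex at the vehicle; each cone ray is
    intersected with the ellipsoid by solving a quadratic.
    Assumes the velocity is not parallel to the z-axis."""
    p, v = np.asarray(pos, float), np.asarray(vel, float)
    speed = np.linalg.norm(v)
    cos_t = f_dopp * C / (f_c * speed)
    sin_t = np.sqrt(1.0 - cos_t**2)
    vh = v / speed
    e1 = np.cross(vh, [0.0, 0.0, 1.0]); e1 /= np.linalg.norm(e1)
    e2 = np.cross(vh, e1)
    S = np.array([1/A_E**2, 1/A_E**2, 1/B_E**2])   # diagonal of ellipsoid quadric
    pts = []
    for psi in np.linspace(0, 2*np.pi, n_pts, endpoint=False):
        u = cos_t*vh + sin_t*(np.cos(psi)*e1 + np.sin(psi)*e2)
        a = np.sum(S*u*u); b = 2*np.sum(S*p*u); c = np.sum(S*p*p) - 1.0
        disc = b*b - 4*a*c
        if disc < 0:
            continue                   # this cone ray misses the earth
        t = (-b - np.sqrt(disc)) / (2*a)
        if t > 0:
            pts.append(p + t*u)        # nearer of the two intersections
    return np.array(pts)

# illustrative case: UAV 5 km above the equator, 200 m/s, 500 Hz shift at 1 GHz
curve = doppler_curve(pos=[A_E + 5e3, 0.0, 0.0], vel=[0.0, 200.0, 0.0],
                      f_dopp=500.0, f_c=1e9)
```

In the paper's pipeline, each point of `curve` would then be mapped onto the DTED terrain model rather than left on the reference ellipsoid.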
Micro-Doppler analysis has become increasingly popular in recent years owing to its ability to enhance classification strategies. Applications include recognising everyday human activities, distinguishing drones from birds, and identifying different types of vehicles. However, noisy time-frequency spectrograms can significantly degrade classifier performance and must be tackled with appropriate denoising algorithms. In recent years, deep learning has spawned many neural network-based denoising algorithms, for which noise modelling is the most important ingredient in training. In this paper, we decompose the problem and propose a novel denoising scheme: first, a Generative Adversarial Network (GAN) is used to learn the noise distribution and correlation from the real-world environment; then, a simulator is used to generate clean Micro-Doppler spectrograms; finally, the generated noise and clean simulated data are combined into training data for a Convolutional Neural Network (CNN) denoiser. In experiments, we analyse this procedure qualitatively and quantitatively on both simulated and measured data. Moreover, the idea of learning noise from real data transfers well to other existing frameworks and yields better performance than alternative noise models.
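The final stage, combining learned noise with clean simulated spectrograms to form training pairs, might look like the sketch below. Here a correlated-noise routine stands in for the trained GAN generator and a single sinusoidal track stands in for the simulator; both are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def gan_noise(shape):
    """Stand-in for the trained GAN generator: white noise smoothed along
    the time axis, mimicking the correlated real-world noise texture the
    GAN would have learned."""
    w = rng.standard_normal(shape)
    k = np.ones(5) / 5.0
    return np.apply_along_axis(lambda row: np.convolve(row, k, mode="same"),
                               -1, w)

def simulate_spectrogram(shape=(64, 128)):
    """Stand-in simulator: one clean sinusoidal micro-Doppler track."""
    f, t = shape
    spec = np.zeros(shape)
    track = (f / 2 + f / 4 * np.sin(2 * np.pi * np.arange(t) / t)).astype(int)
    spec[track, np.arange(t)] = 1.0
    return spec

# build (noisy, clean) training pairs for the CNN denoiser
clean = np.stack([simulate_spectrogram() for _ in range(8)])
noisy = clean + 0.3 * gan_noise(clean.shape)
```

The CNN denoiser (not shown) would then be trained to map `noisy` back to `clean`; the point of the scheme is that the noise term comes from measured data rather than from a hand-crafted model.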
Several coded exposure techniques have been proposed for acquiring high frame rate videos at low bandwidth. Most recently, a Coded-2-Bucket (C2B) camera has been proposed that can acquire two compressed measurements in a single exposure, unlike earlier coded exposure techniques, which acquire only a single measurement. Although two measurements should be better than one for effective video recovery, the precise advantage, whether quantitative or qualitative, has not been established. Here, we propose a unified learning-based framework for such a comparison between sensors that capture a single coded image per exposure (flutter shutter, pixel-wise coded exposure) and those that capture two measurements per exposure (C2B). Our framework consists of a shift-variant convolutional layer followed by a fully convolutional deep neural network, and achieves state-of-the-art reconstructions for all three sensing techniques. Further analysis shows that when most scene points are static, the C2B sensor has a significant advantage over a single pixel-wise coded measurement. However, when most scene points undergo motion, the C2B sensor offers only a marginal benefit over the single pixel-wise coded exposure measurement.
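A minimal forward model clarifies how the three sensing schemes differ; the sizes and random codes below are illustrative, and the learning-based recovery network is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 8, 4, 4                       # frames per exposure, spatial size
video = rng.random((T, H, W))           # toy high-frame-rate scene

# Flutter shutter: one global on/off temporal code shared by all pixels
fs_code = rng.integers(0, 2, T).astype(float)
flutter = np.tensordot(fs_code, video, axes=1)          # single coded image

# Pixel-wise coded exposure: each pixel has its own temporal code
pw_code = rng.integers(0, 2, (T, H, W)).astype(float)
pixelwise = (pw_code * video).sum(axis=0)               # single coded image

# C2B: light blocked from bucket 1 is routed to bucket 2, so the two
# measurements are complementary and no photons are discarded
bucket1 = (pw_code * video).sum(axis=0)
bucket2 = ((1.0 - pw_code) * video).sum(axis=0)
```

The complementarity is the C2B sensor's key property: `bucket1 + bucket2` equals the full (uncoded) exposure, which is why the second measurement helps most when the scene is largely static.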
In matrix recovery from random linear measurements, one is interested in recovering an unknown $M$-by-$N$ matrix $X_0$ from $n<MN$ measurements $y_i=Tr(A_i^T X_0)$, where each $A_i$ is an $M$-by-$N$ measurement matrix with i.i.d. random entries, $i=1,\ldots,n$. We present a novel matrix recovery algorithm, based on approximate message passing, which iteratively applies an optimal singular value shrinker -- a nonconvex nonlinearity tailored specifically for matrix estimation. Our algorithm typically converges exponentially fast, offering a significant speedup over previously suggested matrix recovery algorithms, such as iterative solvers for Nuclear Norm Minimization (NNM). It is well known that there is a tradeoff between the information content of the object $X_0$ to be recovered (specifically, its matrix rank $r$) and the number of linear measurements $n$ from which recovery is to be attempted. The precise tradeoff between $r$ and $n$ beyond which recovery by a given algorithm becomes possible traces the so-called phase transition curve of that algorithm in the $(r,n)$ plane. The phase transition curve of our algorithm is noticeably better than that of NNM. Interestingly, it is close to the information-theoretic lower bound on the minimal number of measurements needed for matrix recovery, making our algorithm not only state-of-the-art in terms of convergence rate, but also near-optimal in terms of the matrices it successfully recovers.
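The core iteration can be sketched as follows; for brevity this toy uses hard rank truncation in place of the optimal shrinker and omits the AMP Onsager correction term, so it is a simplified iterative shrinkage scheme rather than the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
M, N, r, n = 20, 20, 2, 320            # n < M*N = 400 measurements

# rank-r ground truth and Gaussian measurement matrices A_i
X0 = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))
A = rng.standard_normal((n, M, N)) / np.sqrt(n)
y = np.einsum("kij,ij->k", A, X0)      # y_i = Tr(A_i^T X0)

def shrink(X, rank):
    """Hard rank-r truncation of the singular values (a simple stand-in
    for the optimal singular value shrinker used in the paper)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

# gradient step on the residual followed by singular value shrinkage
X = np.zeros((M, N))
for _ in range(300):
    resid = y - np.einsum("kij,ij->k", A, X)
    X = shrink(X + 0.4 * np.einsum("k,kij->ij", resid, A), r)

rel_err = np.linalg.norm(X - X0) / np.linalg.norm(X0)
```

Even this simplified scheme recovers $X_0$ here because $n = 320$ comfortably exceeds the $r(M+N-r) = 76$ degrees of freedom of a rank-2 matrix; the Onsager term and optimal shrinker mainly sharpen the phase transition and speed up convergence.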
The multipath radio channel is considered to have a non-bandlimited channel impulse response. It is therefore challenging to achieve high-resolution time-delay (TD) estimation of multipath components (MPCs) from bandlimited observations of communication signals. In this paper, we consider the problem of multiband channel sampling and TD estimation of MPCs. We assume that a nonideal multibranch receiver is used for multiband sampling, where the noise is nonuniform across the receiver branches. The resulting data model of Hankel matrices formed from the acquired samples has multiple shift-invariance structures, and we propose an algorithm for TD estimation based on weighted subspace fitting. The subspace fitting is formulated as a separable nonlinear least squares (NLS) problem and solved using a variable projection method. The proposed algorithm supports high-resolution TD estimation from an arbitrary number of bands and allows for nonuniform noise across the bands. Numerical simulations show that the algorithm nearly attains the Cramér-Rao lower bound and outperforms previously proposed methods such as multiresolution TOA, MI-MUSIC, and ESPRIT.
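For intuition, a single-band noiseless version of the shift-invariance idea (plain ESPRIT rather than the paper's multiband weighted subspace fitting) can be sketched as:

```python
import numpy as np

df = 1e6                               # frequency spacing within a band, Hz
K = 64                                 # frequency-domain channel samples
taus = np.array([50e-9, 180e-9])       # true multipath delays, s
alphas = np.array([1.0, 0.6 * np.exp(1j * 0.4)])   # complex path gains

k = np.arange(K)
h = (alphas * np.exp(-2j * np.pi * df * np.outer(k, taus))).sum(axis=1)

# Hankel matrix of the channel samples: its column space is spanned by
# Vandermonde vectors in z_m = exp(-j 2 pi df tau_m) (shift invariance)
L = K // 2
H = np.array([h[i:i + L] for i in range(K - L + 1)]).T

# ESPRIT: least-squares fit between the two shifted signal subspaces
U = np.linalg.svd(H, full_matrices=False)[0][:, :len(taus)]
Phi = np.linalg.lstsq(U[:-1], U[1:], rcond=None)[0]
z = np.linalg.eigvals(Phi)
tau_hat = np.sort(-np.angle(z) / (2 * np.pi * df))
print(tau_hat)                         # ≈ [5e-08, 1.8e-07]
```

The paper generalizes this picture: with several bands and branch-dependent noise levels, the per-band Hankel matrices share the same delay parameters, and the weighted subspace fit (solved via variable projection) combines them while downweighting the noisier branches.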