
Learning from Natural Noise to Denoise Micro-Doppler Spectrogram

 Added by Shelly Vishwakarma
 Publication date 2021
 Research language: English





Micro-Doppler analysis has become increasingly popular in recent years owing to its ability to enhance classification strategies. Applications include recognising everyday human activities, distinguishing drones from birds, and identifying different types of vehicles. However, noisy time-frequency spectrograms can significantly degrade classifier performance and must be tackled with appropriate denoising algorithms. In recent years, deep learning has spawned many deep-neural-network-based denoising algorithms; for these methods, noise modelling is the most important component and is used to assist training. In this paper, we decompose the problem and propose a novel denoising scheme: first, a Generative Adversarial Network (GAN) learns the noise distribution and correlation from the real-world environment; then, a simulator generates clean micro-Doppler spectrograms; finally, the generated noise and the clean simulated data are combined into training data for a Convolutional Neural Network (CNN) denoiser. In experiments, we analyse this procedure qualitatively and quantitatively on both simulated and measured data. Moreover, the idea of learning from natural noise transfers well to other existing frameworks and achieves better performance than other noise models.
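The three-stage scheme above (GAN-learned noise + simulated clean spectrograms, combined into training pairs for a CNN denoiser) can be sketched in outline. The following is a minimal numpy illustration, not the authors' implementation: `simulate_clean_spectrogram` and `sample_learned_noise` are hypothetical stand-ins, and the "learned" noise here is just spatially correlated Gaussian noise rather than the output of a trained GAN.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_clean_spectrogram(shape=(64, 64)):
    """Toy micro-Doppler-like pattern: a sinusoidal ridge in time-frequency."""
    t = np.linspace(0, 1, shape[1])
    spec = np.zeros(shape)
    ridge = (shape[0] // 2 + (shape[0] // 4) * np.sin(2 * np.pi * 2 * t)).astype(int)
    spec[ridge, np.arange(shape[1])] = 1.0
    return spec

def sample_learned_noise(shape=(64, 64)):
    """Placeholder for GAN-sampled noise: correlated Gaussian via a 3x3 box blur."""
    raw = rng.normal(0, 0.2, shape)
    padded = np.pad(raw, 1, mode="edge")
    out = np.zeros(shape)
    for i in range(shape[0]):
        for j in range(shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()
    return out

# Training pair for the CNN denoiser: (noisy input, clean target)
clean = simulate_clean_spectrogram()
noisy = clean + sample_learned_noise()
```

The key design point the paper exploits is that the noisy input is built additively from independently obtained noise and clean components, so perfectly clean targets are always available for supervised training.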




Micro-Doppler signatures contain considerable information about target dynamics. However, radar sensing systems are easily affected by noisy surroundings, resulting in uninterpretable motion patterns on the micro-Doppler spectrogram. Meanwhile, radar returns often suffer from multipath, clutter, and interference. These issues complicate tasks such as motion feature extraction and activity classification using micro-Doppler signatures ($\mu$-DS). In this paper, we propose a latent feature-wise mapping strategy, called Feature Mapping Network (FMNet), to transform measured spectrograms so that they more closely resemble the output of a simulation under the same conditions. Based on measured spectrograms and the matched simulated data, our framework contains three parts: an Encoder that extracts latent representations/features, a Decoder that outputs a reconstructed spectrogram from the latent features, and a Discriminator that minimises the distance between the latent features of measured and simulated data. We demonstrate the FMNet on data from six activities in two experimental scenarios; the final results show strongly enhanced patterns while retaining the actual motion information to the greatest extent. We also propose a novel idea: train a classifier with only simulated data and predict new measured samples after cleaning them up with the FMNet. The final classification results show significant improvements.
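The Encoder/Decoder/Discriminator data flow described above can be sketched with toy linear maps. This is a sketch of the structure only, under the assumption of flattened spectrograms; the actual FMNet components are convolutional networks, and all names and dimensions below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

D_IN, D_LATENT = 256, 32           # flattened spectrogram size, latent size
W_enc = rng.normal(0, 0.1, (D_LATENT, D_IN))
W_dec = rng.normal(0, 0.1, (D_IN, D_LATENT))

def encode(spec):                  # Encoder: spectrogram -> latent features
    return W_enc @ spec

def decode(z):                     # Decoder: latent features -> spectrogram
    return W_dec @ z

def latent_distance(z_meas, z_sim):
    # The Discriminator's role in spirit: the training objective shrinks the
    # gap between latent features of measured and simulated spectrograms.
    return float(np.linalg.norm(z_meas - z_sim))

measured = rng.normal(0, 1, D_IN)   # flattened measured spectrogram (toy)
simulated = rng.normal(0, 1, D_IN)  # matched simulated spectrogram (toy)

z_m, z_s = encode(measured), encode(simulated)
reconstruction = decode(z_m)        # "cleaned" spectrogram from measured input
gap = latent_distance(z_m, z_s)     # quantity the adversarial training reduces
```

Once the latent gap is small, a classifier trained purely on simulated spectrograms can be applied to decoded measured ones, which is the transfer idea the abstract closes with.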
We propose an audio-to-audio neural network model that learns to denoise old music recordings. Our model internally converts its input into a time-frequency representation by means of a short-time Fourier transform (STFT), and processes the resulting complex spectrogram using a convolutional neural network. The network is trained with both reconstruction and adversarial objectives on a synthetic noisy music dataset, which is created by mixing clean music with real noise samples extracted from quiet segments of old recordings. We evaluate our method quantitatively on held-out test examples of the synthetic dataset, and qualitatively by human rating on samples of actual historical recordings. Our results show that the proposed method is effective in removing noise, while preserving the quality and details of the original music.
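The STFT front end this model relies on is standard and easy to sketch. Below is a minimal, self-contained numpy STFT applied to a synthetic "noisy recording" (a tone plus broadband noise), mirroring the synthetic-dataset idea of mixing clean audio with noise; window and hop sizes are illustrative, not the paper's settings.

```python
import numpy as np

def stft(x, win=256, hop=128):
    """Naive short-time Fourier transform: rfft of Hann-windowed frames."""
    window = np.hanning(win)
    frames = [np.fft.rfft(window * x[i:i + win])
              for i in range(0, len(x) - win + 1, hop)]
    return np.array(frames).T      # shape: (win // 2 + 1, n_frames)

# Synthetic noisy input: a 440 Hz tone plus Gaussian noise.
sr = 8000
t = np.arange(sr) / sr
rng = np.random.default_rng(2)
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.3 * rng.normal(size=sr)

spec = stft(noisy)   # the complex spectrogram a CNN denoiser would process
```

Working on the complex spectrogram (rather than magnitude only) lets the network modify phase as well, which matters when the denoised output must be inverted back to audio.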
Channel state information (CSI) feedback is critical for frequency division duplex (FDD) massive multi-input multi-output (MIMO) systems. Most conventional algorithms are based on compressive sensing (CS) and are highly dependent on the level of channel sparsity. To address this issue, a recent approach adopts deep learning (DL) to compress CSI into a low-dimensional codeword, which has shown much better performance than the CS algorithms when the feedback link is perfect. In practical scenarios, however, various kinds of interference and non-linear effects exist. In this article, we design a DL-based denoising network, called DNNet, to improve the performance of channel feedback. Numerical results show that the DL-based feedback algorithm with the proposed DNNet outperforms the existing algorithms, especially at low signal-to-noise ratio (SNR).
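The feedback chain this abstract describes (compress CSI to a codeword, corrupt it on an imperfect link, denoise before reconstruction) can be sketched with linear stand-ins. Everything below is hypothetical: the real compressor and DNNet are learned networks, the "denoiser" here is a simple shrinkage step, and all dimensions are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

N_CSI, N_CODE = 128, 16            # CSI vector length, codeword length (toy)
W_compress = rng.normal(0, 1 / np.sqrt(N_CSI), (N_CODE, N_CSI))

csi = rng.normal(0, 1, N_CSI)      # channel state information vector (toy)
codeword = W_compress @ csi        # low-dimensional feedback codeword

# Imperfect feedback link: additive noise on the codeword.
received = codeword + 0.5 * rng.normal(size=N_CODE)

# Stand-in "denoiser" applied before reconstruction (the role DNNet plays);
# a real DNNet is trained to map noisy codewords back toward clean ones.
denoised = 0.8 * received

err_link = float(np.linalg.norm(received - codeword))
```

The point of inserting the denoiser between link and decoder is that the downstream CSI reconstruction network was trained on clean codewords, so cleaning the codeword first keeps it in-distribution.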
In this paper, we present an algorithm for determining a curve on the Earth's terrain on which a stationary emitter must lie, according to a single Doppler shift measured on an unmanned aerial vehicle (UAV) or a low Earth orbit satellite (LEOS). The mobile vehicle measures the Doppler shift and uses it, together with the vehicle's velocity, to build the equations of a particular right circular cone, then determines the curve consisting of the points where the cone intersects an ellipsoid that approximately describes the Earth's surface. The intersection points of the cone with the ellipsoid are mapped onto a digital terrain data set, namely Digital Terrain Elevation Data (DTED), to generate the intersection points on the Earth's terrain. The work also considers the possibility that the Earth's rotation could affect the Doppler shift, as well as the errors resulting from the non-constant refractive index of the atmosphere and from the lack of precise knowledge of the transmitter frequency.
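The cone construction follows from the Doppler relation f_d = (|v|/λ)·cos θ, where θ is the angle between the vehicle's velocity vector and the line of sight to the emitter: a single measured f_d fixes cos θ, and all directions making that angle with the velocity form a right circular cone. A minimal sketch, with all numerical values illustrative (the paper notes the transmitter frequency is not precisely known in practice):

```python
import numpy as np

c = 3.0e8                  # speed of light, m/s
f_carrier = 1.0e9          # assumed transmitter frequency, Hz (illustrative)
wavelength = c / f_carrier

speed = 200.0              # vehicle speed, m/s (illustrative)
f_doppler = 400.0          # measured Doppler shift, Hz (illustrative)

cos_theta = f_doppler * wavelength / speed
theta = np.arccos(cos_theta)   # half-angle of the right circular cone

# Any unit direction u with u . v_hat == cos_theta lies on the cone;
# intersecting that cone with the Earth ellipsoid (and then mapping onto
# DTED) yields the candidate emitter curve.
v_hat = np.array([1.0, 0.0, 0.0])                           # velocity direction
u = np.array([cos_theta, np.sqrt(1 - cos_theta**2), 0.0])   # one cone direction
```

Note that positive and negative Doppler shifts distinguish the forward and backward cones, and f_d = 0 degenerates to the plane perpendicular to the velocity.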
Wenbo Cao, Weiwei Zhang (2020)
Machine learning of partial differential equations from data is a potential breakthrough for addressing the lack of physical equations in complex dynamic systems, but because numerical differentiation is ill-posed on noisy data, noise has become the biggest obstacle to applying partial differential equation identification methods. To overcome this problem, we propose a Frequency Domain Identification method based on Fourier transforms, which effectively eliminates the influence of noise by using only the low-frequency components of the frequency-domain data to identify partial differential equations in the frequency domain. We also propose a new sparse identification criterion, which can accurately identify the terms of the equation from low signal-to-noise-ratio data. By identifying a variety of canonical equations spanning a number of scientific domains, the proposed method is shown to have high accuracy and robustness in identifying equation structure and parameters from low signal-to-noise-ratio data. The method provides a promising technique for discovering potential partial differential equations from noisy experimental data.
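The core observation above is that spectral differentiation multiplies each Fourier mode by its wavenumber, so high-frequency noise is amplified most, while restricting attention to low-frequency components suppresses it. A minimal numpy sketch of that effect (this illustrates the principle only, with an arbitrary cutoff, not the paper's identification algorithm or sparse criterion):

```python
import numpy as np

rng = np.random.default_rng(4)

n = 256
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u_clean = np.sin(x)
u_noisy = u_clean + 0.05 * rng.normal(size=n)

k = np.fft.fftfreq(n, d=x[1] - x[0]) * 2 * np.pi   # angular wavenumbers
U = np.fft.fft(u_noisy)

keep = np.abs(k) <= 5              # retain only low-frequency components
U_low = np.where(keep, U, 0)

# Spectral derivative from low-pass data vs. from the raw noisy data
du_low = np.real(np.fft.ifft(1j * k * U_low))
du_raw = np.real(np.fft.ifft(1j * k * U))
du_true = np.cos(x)

err_low = float(np.max(np.abs(du_low - du_true)))
err_raw = float(np.max(np.abs(du_raw - du_true)))
```

Since the wavenumber factor grows linearly, the raw-data derivative error is dominated by the highest retained modes; truncating the spectrum removes exactly those modes and leaves the signal content (here a single k = 1 mode) nearly intact.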
