
End-to-end Learning of Waveform Generation and Detection for Radar Systems

Submitted by: Wei Jiang
Publication date: 2019
Research field: Electronic Engineering
Paper language: English





An end-to-end learning approach is proposed for the joint design of transmitted waveform and detector in a radar system. Detector and transmitted waveform are trained alternately: For a fixed transmitted waveform, the detector is trained using supervised learning so as to approximate the Neyman-Pearson detector; and for a fixed detector, the transmitted waveform is trained using reinforcement learning based on feedback from the receiver. No prior knowledge is assumed about the target and clutter models. Both transmitter and receiver are implemented as feedforward neural networks. Numerical results show that the proposed end-to-end learning approach is able to obtain a more robust radar performance in clutter and colored noise of arbitrary probability density functions as compared to conventional methods, and to successfully adapt the transmitted waveform to environmental conditions.
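As a rough illustration of the alternating procedure described in the abstract, the sketch below trains a feedforward detector with supervised learning for a fixed waveform and then updates the waveform with a simple REINFORCE-style policy-gradient step using the detector's output as feedback. The network sizes, the Gaussian exploration policy, the simplified reward, and the radar_echo placeholder environment are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

N = 16                                                    # fast-time samples per pulse (assumption)
detector = nn.Sequential(nn.Linear(2 * N, 64), nn.ReLU(), nn.Linear(64, 1))
waveform = nn.Parameter(0.1 * torch.randn(2 * N))         # real/imag parts of the transmitted waveform
sigma = 0.05                                              # exploration noise of the Gaussian policy
opt_det = torch.optim.Adam(detector.parameters(), lr=1e-3)
opt_tx = torch.optim.Adam([waveform], lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def radar_echo(w, target_present):                        # placeholder environment (assumption)
    clutter = 0.3 * torch.randn_like(w)                   # stand-in for unknown clutter and colored noise
    return target_present * w + clutter + 0.1 * torch.randn_like(w)

for it in range(1000):
    # 1) Detector step: supervised training for the current waveform (approximates Neyman-Pearson).
    labels = torch.randint(0, 2, (64, 1)).float()
    x = torch.stack([radar_echo(waveform.detach(), lab) for lab in labels[:, 0]])
    opt_det.zero_grad()
    bce(detector(x), labels).backward()
    opt_det.step()

    # 2) Transmitter step: REINFORCE with the detector's score as a simplified reward
    #    (detection score when a target is present; not the paper's exact reward).
    w = waveform + sigma * torch.randn(2 * N)              # sampled waveform (exploration)
    with torch.no_grad():
        reward = torch.sigmoid(detector(radar_echo(w, 1.0))).item()
    log_prob = -((w.detach() - waveform) ** 2).sum() / (2 * sigma ** 2)  # Gaussian policy log-density
    opt_tx.zero_grad()
    (-reward * log_prob).backward()
    opt_tx.step()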


Read also

The problem of data-driven joint design of transmitted waveform and detector in a radar system is addressed in this paper. We propose two novel learning-based approaches to waveform and detector design based on end-to-end training of the radar system. The first approach consists of alternating supervised training of the detector for a fixed waveform and reinforcement learning of the transmitter for a fixed detector. In the second approach, the transmitter and detector are trained simultaneously. Various operational waveform constraints, such as peak-to-average-power ratio (PAR) and spectral compatibility, are incorporated into the design. Unlike traditional radar design methods that rely on rigid mathematical models with limited applicability, it is shown that radar learning can be robustified by training the detector with synthetic data generated from multiple statistical models of the environment. Theoretical considerations and results show that the proposed methods are capable of adapting the transmitted waveform to environmental conditions while satisfying design constraints.
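The peak-to-average-power ratio (PAR) constraint mentioned above can, for example, be enforced by an approximate projection that alternates power normalization with peak clipping. The NumPy sketch below is one simple way to do this and is not the constraint-handling mechanism used in the paper; after the final normalization the PAR can exceed gamma marginally, but the alternation typically converges so the violation is negligible.

import numpy as np

def project_par(w, gamma, iters=50):
    # Approximate projection onto {unit average power, PAR <= gamma} (illustrative).
    peak = np.sqrt(gamma)                                 # max allowed magnitude at unit power
    for _ in range(iters):
        w = w / np.sqrt(np.mean(np.abs(w) ** 2))          # renormalize to unit average power
        mag = np.abs(w)
        w = np.where(mag > peak, w / mag * peak, w)       # clip samples above sqrt(gamma)
    return w / np.sqrt(np.mean(np.abs(w) ** 2))           # final unit-power normalization

rng = np.random.default_rng(0)
w = rng.standard_normal(64) + 1j * rng.standard_normal(64)
w_c = project_par(w, gamma=2.0)
print(np.max(np.abs(w_c) ** 2) / np.mean(np.abs(w_c) ** 2))   # PAR after projection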
End-to-end mission performance simulators (E2ES) are suitable tools to accelerate satellite mission development from concept to deployment. One core element of these E2ES is the generation of synthetic scenes that are observed by the various instruments of an Earth Observation mission. The generation of these scenes relies on Radiative Transfer Models (RTM) for the simulation of light interaction with the Earth surface and atmosphere. However, the execution of advanced RTMs is impractical due to their large computation burden. Classical interpolation and statistical emulation methods applied to pre-computed Look-Up Tables (LUT) are therefore common practice to generate synthetic scenes in a reasonable time. This work evaluates the accuracy and computation cost of interpolation and emulation methods to sample the input LUT variable space. The results on MODTRAN-based top-of-atmosphere radiance data show that Gaussian Process emulators produced more accurate output spectra than linear interpolation at a fraction of its time. It is concluded that emulation can function as a fast and more accurate alternative to interpolation for LUT parameter space sampling.
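As a toy version of the comparison described above, the following sketch emulates a stand-in "radiative transfer" function with a Gaussian Process and compares it against linear interpolation of the same coarse look-up table; the 1-D test function and grid are assumptions, not the MODTRAN-based LUT used in the study.

import numpy as np
from scipy.interpolate import interp1d
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def toy_rtm(x):                                  # stand-in for an expensive RTM evaluation
    return np.sin(3 * x) * np.exp(-0.5 * x)

x_lut = np.linspace(0.0, 3.0, 12)                # coarse pre-computed LUT nodes
y_lut = toy_rtm(x_lut)

lin = interp1d(x_lut, y_lut, kind="linear")      # classical LUT interpolation
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(x_lut.reshape(-1, 1), y_lut)              # GP emulator fitted to the same LUT

x_test = np.linspace(0.0, 3.0, 500)
rmse_lin = np.sqrt(np.mean((lin(x_test) - toy_rtm(x_test)) ** 2))
rmse_gp = np.sqrt(np.mean((gp.predict(x_test.reshape(-1, 1)) - toy_rtm(x_test)) ** 2))
print(f"linear interpolation RMSE: {rmse_lin:.4f}, GP emulation RMSE: {rmse_gp:.4f}")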
Yankun Xu, Jie Yang, Shiqi Zhao (2021)
An accurate seizure prediction system enables early warnings before seizure onset of epileptic patients. It is extremely important for drug-refractory patients. Conventional seizure prediction works usually rely on features extracted from Electroencephalography (EEG) recordings and classification algorithms such as regression or support vector machine (SVM) to locate the short time before seizure onset. However, such methods cannot achieve high-accuracy prediction due to information loss of the hand-crafted features and the limited classification ability of regression and SVM algorithms. We propose an end-to-end deep learning solution using a convolutional neural network (CNN) in this paper. One and two dimensional kernels are adopted in the early- and late-stage convolution and max-pooling layers, respectively. The proposed CNN model is evaluated on Kaggle intracranial and CHB-MIT scalp EEG datasets. Overall sensitivity, false prediction rate, and area under receiver operating characteristic curve reach 93.5%, 0.063/h, 0.981 and 98.8%, 0.074/h, 0.988 on the two datasets respectively. Comparison with state-of-the-art works indicates that the proposed model achieves superior prediction performance.
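A minimal PyTorch sketch of the kind of architecture described above, with kernels of size (1, k) acting along time in the early convolution and max-pooling stages and genuinely two-dimensional kernels in the later stage; the input shape, channel counts and kernel sizes are illustrative assumptions rather than the configuration reported in the paper.

import torch
import torch.nn as nn

class SeizureCNN(nn.Module):
    def __init__(self, n_eeg_channels=16, n_time=1024):
        super().__init__()
        self.features = nn.Sequential(
            # Early stage: (1, k) kernels act along time only (1-D behaviour per EEG channel).
            nn.Conv2d(1, 8, kernel_size=(1, 7), padding=(0, 3)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 4)),
            nn.Conv2d(8, 16, kernel_size=(1, 5), padding=(0, 2)), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 4)),
            # Late stage: 2-D kernels mix EEG channels and time.
            nn.Conv2d(16, 32, kernel_size=(3, 3), padding=1), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(2, 2)),
        )
        feat_dim = 32 * (n_eeg_channels // 2) * (n_time // 4 // 4 // 2)
        self.classifier = nn.Linear(feat_dim, 1)          # preictal vs. interictal logit

    def forward(self, x):                                 # x: (batch, 1, channels, time)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = SeizureCNN()
logits = model(torch.randn(4, 1, 16, 1024))
print(logits.shape)                                       # torch.Size([4, 1])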
Recently, deep learning has been considered for optimizing the end-to-end performance of digital communication systems. The promise of learning a digital communication scheme from data is attractive, since this makes the scheme adaptable and precisely tunable to many scenarios and channel models. In this paper, we analyse a widely used neural network architecture and show that the training of the end-to-end architecture suffers from normalization errors introduced by an average power constraint. To solve this issue, we propose a modified architecture: shifting the batch slicing after the normalization layer. This approach meets the normalization constraints better, especially in the case of small batch sizes. Finally, we experimentally demonstrate that our modified architecture leads to significantly improved performance of trained models, even for large batch sizes where normalization constraints are more easily met.
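The normalization issue can be illustrated numerically: when small slices of a batch are normalized to unit average power independently, the scaling factor fluctuates from slice to slice, whereas normalizing the full batch once and slicing afterwards applies a single consistent scale. In the sketch below, the batch size, slice size and symbol dimension are arbitrary assumptions.

import numpy as np

rng = np.random.default_rng(1)
batch = rng.standard_normal((1024, 2))                    # encoder outputs (I/Q components)
slice_size = 8                                            # small slice -> noisy per-slice power estimate

def scale_to_unit_power(x):
    # Scaling factor that brings the average symbol power of x to one.
    return 1.0 / np.sqrt(np.mean(np.sum(x ** 2, axis=1)))

scales_per_slice = [scale_to_unit_power(batch[i:i + slice_size])
                    for i in range(0, len(batch), slice_size)]
scale_full_batch = scale_to_unit_power(batch)             # one scale for the whole batch

print("per-slice scales: mean %.3f, std %.3f" % (np.mean(scales_per_slice), np.std(scales_per_slice)))
print("full-batch scale: %.3f (applied uniformly when slicing after normalization)" % scale_full_batch)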
We propose an autoencoder-based geometric shaping that learns a constellation robust to SNR and laser linewidth estimation errors. This constellation maintains shaping gain in mutual information (up to 0.3 bits/symbol) with respect to QAM over various SNR and laser linewidth values.
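A bare-bones sketch of autoencoder-based geometric shaping in PyTorch: a learnable embedding acts as the constellation, a toy AWGN-plus-phase-noise channel stands in for the optical impairments, and a small decoder is trained with cross-entropy. The constellation size, SNR and phase-noise level are illustrative assumptions, not the paper's setup.

import torch
import torch.nn as nn

M = 16                                                    # constellation size (assumption)
enc = nn.Embedding(M, 2)                                  # learnable 2-D constellation points
dec = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, M))
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)
ce = nn.CrossEntropyLoss()
snr_db, phase_std = 12.0, 0.05                            # channel assumptions

for step in range(2000):
    msgs = torch.randint(0, M, (256,))
    x = enc(msgs)
    x = x / torch.sqrt((x ** 2).sum(dim=1).mean())        # unit average power constraint
    theta = phase_std * torch.randn(256, 1)               # residual laser phase noise
    rot = torch.cat([x[:, :1] * torch.cos(theta) - x[:, 1:] * torch.sin(theta),
                     x[:, :1] * torch.sin(theta) + x[:, 1:] * torch.cos(theta)], dim=1)
    noise_std = (10 ** (-snr_db / 20)) / (2 ** 0.5)       # per-dimension AWGN std at unit Es
    y = rot + noise_std * torch.randn_like(rot)
    opt.zero_grad()
    ce(dec(y), msgs).backward()
    opt.step()

print(enc.weight.detach())                                # learned constellation (before normalization)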