
Deep Learning for Large-Scale Real-World ACARS and ADS-B Radio Signal Classification

Published by: Shilian Zheng
Publication date: 2019
Paper language: English





Radio signal classification has a very wide range of applications in wireless communications and electromagnetic spectrum management. In recent years, deep learning has been applied to radio signal classification and has achieved good results. However, the radio signal data used so far has been very limited in scale. To verify the performance of deep learning-based radio signal classification on real-world data, in this paper we conduct experiments on large-scale real-world ACARS and ADS-B signal data with sample sizes of 900,000 and 13,000,000 and with 3,143 and 5,157 categories, respectively. We use the same Inception-Residual neural network structure for both ACARS and ADS-B signal classification to verify the ability of a single basic deep neural network structure to process different types of radio signals, i.e., communication bursts in ACARS and pulse bursts in ADS-B. We build an experimental system for radio signal deep learning experiments. Experimental results show that the classification accuracy for ACARS and ADS-B is 98.1% and 96.3%, respectively. When the signal-to-noise ratio (with injected additive white Gaussian noise) is greater than 9 dB, the classification accuracy is greater than 92%. These results validate the ability of deep learning to classify large-scale real-world radio signals. Transfer learning experiments show that a model trained on the large-scale ADS-B dataset benefits the learning of new tasks more than a model trained on a small-scale dataset.
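The abstract names an Inception-Residual network structure and AWGN injection at controlled SNRs, but gives no layer-level details, so the Python sketch below is only illustrative. The filter counts, kernel sizes, the input length of 1,024 I/Q samples, the number of blocks, and the helper names (add_awgn, inception_residual_block, build_classifier) are assumptions, not the authors' implementation.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers

    def add_awgn(iq, snr_db):
        """Inject additive white Gaussian noise at a target SNR (dB)
        into complex baseband samples of one burst."""
        sig_power = np.mean(np.abs(iq) ** 2)
        noise_power = sig_power / (10.0 ** (snr_db / 10.0))
        noise = np.sqrt(noise_power / 2.0) * (
            np.random.randn(*iq.shape) + 1j * np.random.randn(*iq.shape))
        return iq + noise

    def inception_residual_block(x, filters=32):
        """Parallel 1-D convolutions with different kernel sizes,
        concatenated, projected back to the input width, and added
        to a shortcut connection."""
        b1 = layers.Conv1D(filters, 1, padding="same", activation="relu")(x)
        b2 = layers.Conv1D(filters, 3, padding="same", activation="relu")(x)
        b3 = layers.Conv1D(filters, 5, padding="same", activation="relu")(x)
        merged = layers.Concatenate()([b1, b2, b3])
        merged = layers.Conv1D(x.shape[-1], 1, padding="same")(merged)  # match channel count
        return layers.Activation("relu")(layers.Add()([x, merged]))

    def build_classifier(num_classes, length=1024, channels=2, num_blocks=4):
        """Stack a few Inception-Residual blocks over I/Q input and classify."""
        inp = layers.Input(shape=(length, channels))  # I and Q as two channels
        x = layers.Conv1D(64, 7, padding="same", activation="relu")(inp)
        for _ in range(num_blocks):
            x = inception_residual_block(x)
        x = layers.GlobalAveragePooling1D()(x)
        out = layers.Dense(num_classes, activation="softmax")(x)
        return tf.keras.Model(inp, out)

    # e.g. an ADS-B-sized output layer with 5,157 categories
    model = build_classifier(num_classes=5157)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")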




Read also

The success of deep learning (DL) methods in the Brain-Computer Interfaces (BCI) field for classification of electroencephalographic (EEG) recordings has been restricted by the lack of large datasets. Privacy concerns associated with EEG signals limit the possibility of constructing a large EEG-BCI dataset by the conglomeration of multiple small ones for jointly training machine learning models. Hence, in this paper, we propose a novel privacy-preserving DL architecture named federated transfer learning (FTL) for EEG classification that is based on the federated learning framework. Working with the single-trial covariance matrix, the proposed architecture extracts common discriminative information from multi-subject EEG data with the help of domain adaptation techniques. We evaluate the performance of the proposed architecture on the PhysioNet dataset for 2-class motor imagery classification. While avoiding the actual data sharing, our FTL approach achieves 2% higher classification accuracy in a subject-adaptive analysis. Also, in the absence of multi-subject data, our architecture provides 6% better accuracy compared to other state-of-the-art DL architectures.
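That abstract works from the single-trial covariance matrix of each EEG recording. As a minimal sketch, the helper below (the name trial_covariance is hypothetical) computes this spatial covariance with a small ridge term, a common way to keep the matrix positive definite; the federated and domain-adaptation parts of the architecture are not shown.

    import numpy as np

    def trial_covariance(trial, eps=1e-6):
        """Spatial covariance of a single EEG trial.

        trial: array of shape (n_channels, n_samples).
        A small ridge term keeps the matrix symmetric positive definite.
        """
        centered = trial - trial.mean(axis=1, keepdims=True)
        cov = centered @ centered.T / (trial.shape[1] - 1)
        return cov + eps * np.eye(trial.shape[0])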
Radio signal classification has a very wide range of applications in cognitive radio networks and electromagnetic spectrum monitoring. In this article, we consider scenarios where multiple nodes in the network participate in cooperative classification. We propose cooperative radio signal classification methods based on deep learning for decision fusion, signal fusion and feature fusion, respectively. We analyze the performance of these methods through simulation experiments. We conclude the article with a discussion of research challenges and open problems.
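That abstract proposes fusion at the decision, signal, and feature levels without fixing a rule here. A minimal sketch of decision fusion, assuming each cooperating node reports a softmax probability vector that is simply averaged, follows; the function name decision_fusion is hypothetical.

    import numpy as np

    def decision_fusion(node_probabilities):
        """Average the per-node class-probability vectors and pick the winner.

        node_probabilities: list of 1-D arrays, one softmax output per node.
        """
        fused = np.mean(np.stack(node_probabilities, axis=0), axis=0)
        return int(np.argmax(fused))

    # e.g. three nodes voting over four signal classes
    print(decision_fusion([np.array([0.7, 0.1, 0.1, 0.1]),
                           np.array([0.4, 0.3, 0.2, 0.1]),
                           np.array([0.2, 0.5, 0.2, 0.1])]))  # -> 0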
A Deep Neural Network is applied to classify physical signatures obtained from physical sensor measurements of running gasoline and diesel-powered vehicles and other devices. The classification provides information on the target identities as to vehicle type and even vehicle model. The physical measurements include acoustic, acceleration (vibration), geophonic, and magnetic.
We designed and implemented a deep learning based RF signal classifier on the Field Programmable Gate Array (FPGA) of an embedded software-defined radio platform, DeepRadio, that classifies the signals received through the RF front end to different modulation types in real time and with low power. This classifier implementation successfully captures complex characteristics of wireless signals to serve critical applications in wireless security and communications systems such as identifying spoofing signals in signal authentication systems, detecting target emitters and jammers in electronic warfare (EW) applications, discriminating primary and secondary users in cognitive radio networks, interference hunting, and adaptive modulation. Empowered by low-power and low-latency embedded computing, the deep neural network runs directly on the FPGA fabric of DeepRadio, while maintaining classifier accuracy close to the software performance. We evaluated the performance when another SDR (USRP) transmits signals with different modulation types at different power levels and DeepRadio receives the signals and classifies them in real time on its FPGA. A smartphone with a mobile app is connected to DeepRadio to initiate the experiment and visualize the classification results. With real radio transmissions over the air, we show that the classifier implemented on DeepRadio achieves high accuracy with low latency (microsecond per sample) and low energy consumption (microJoule per sample), and this performance is not matched by other embedded platforms such as embedded graphics processing unit (GPU).
Deep learning methods achieve great success in many areas due to their powerful feature extraction capabilities and end-to-end training mechanism, and recently they have also been introduced for radio signal modulation classification. In this paper, we propose a novel deep learning framework called SigNet, where a signal-to-matrix (S2M) operator first converts the original signal into a square matrix and is co-trained with a follow-up CNN architecture for classification. This model is further accelerated by integrating 1D convolution operators, leading to the upgraded model SigNet2.0. Experiments on two signal datasets show that both SigNet and SigNet2.0 outperform a number of well-known baselines, achieving state-of-the-art performance. Notably, they obtain significantly higher accuracy than 1D-ResNet and 2D-CNN (by up to 70.5%), while being much faster than LSTM (saving up to 88.0% of training time). More interestingly, our proposed models behave extremely well in few-shot learning when only a small training set is provided. They achieve relatively high accuracy even when only 1% of the training data is kept, while other baseline models lose their effectiveness much more quickly as the dataset shrinks. These results suggest that SigNet/SigNet2.0 could be extremely useful in situations where labeled signal data are difficult to obtain.
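SigNet's S2M operator is co-trained with the CNN, so the fixed reshape below is only a stand-in that illustrates the shape change involved; the function name signal_to_matrix and the 32x32 side length are assumptions, not the paper's operator.

    import numpy as np

    def signal_to_matrix(signal, side=32):
        """Crop or zero-pad a 1-D signal to side*side samples and reshape it
        into a square matrix, mimicking the shape change an S2M operator performs."""
        samples = np.ravel(np.asarray(signal))
        flat = np.zeros(side * side, dtype=samples.dtype)
        n = min(samples.size, flat.size)
        flat[:n] = samples[:n]
        return flat.reshape(side, side)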

