
End-to-end Learning from Spectrum Data: A Deep Learning approach for Wireless Signal Identification in Spectrum Monitoring applications

Published by: Tarik Kazaz
Publication date: 2017
Research field: Informatics Engineering
Paper language: English





This paper presents end-to-end learning from spectrum data - an umbrella term for new sophisticated wireless signal identification approaches in spectrum monitoring applications based on deep neural networks. End-to-end learning makes it possible to (i) automatically learn features directly from simple wireless signal representations, without requiring the design of hand-crafted expert features such as higher-order cyclic moments, and (ii) train wireless signal classifiers in a single end-to-end step, eliminating the need for complex multi-stage machine learning processing pipelines. The purpose of this article is to present the conceptual framework of end-to-end learning for spectrum monitoring and to systematically introduce a generic methodology for easily designing and implementing wireless signal classifiers. Furthermore, we investigate the importance of the choice of wireless data representation for various spectrum monitoring tasks. In particular, two case studies are elaborated: (i) modulation recognition and (ii) wireless technology interference detection. For each case study, three convolutional neural networks are evaluated on the following wireless signal representations: temporal IQ data, the amplitude/phase representation, and the frequency-domain representation. Our analysis shows that the wireless data representation affects the accuracy depending on the specifics and similarities of the wireless signals that need to be differentiated, with different data representations resulting in accuracy variations of up to 29%. Experimental results show that using the amplitude/phase representation for recognizing modulation formats can lead to performance improvements of up to 2% and 12% at medium-to-high SNR compared to IQ and frequency-domain data, respectively. For the task of detecting interference, the frequency-domain representation outperformed the amplitude/phase and IQ data representations by up to 20%.
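To make the methodology concrete, the following sketch (not taken from the paper; the deep learning library, layer sizes, and the 11-class example are assumptions) shows how raw IQ snapshots could be mapped to the amplitude/phase or frequency-domain representations and fed to a small 1-D convolutional classifier.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers


def to_amplitude_phase(iq):
    """Map complex IQ samples of shape (N, T) to a 2-channel amplitude/phase tensor (N, T, 2)."""
    return np.stack([np.abs(iq), np.angle(iq)], axis=-1)


def to_frequency_domain(iq):
    """Map complex IQ samples of shape (N, T) to a 2-channel FFT magnitude/phase tensor (N, T, 2)."""
    spec = np.fft.fftshift(np.fft.fft(iq, axis=-1), axes=-1)
    return np.stack([np.abs(spec), np.angle(spec)], axis=-1)


def build_cnn(input_len, n_classes):
    """Small 1-D CNN classifier operating on a (T, 2) signal representation."""
    return tf.keras.Sequential([
        layers.Input(shape=(input_len, 2)),
        layers.Conv1D(64, 8, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Conv1D(64, 4, activation="relu", padding="same"),
        layers.MaxPooling1D(2),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])


# Example: 32 random 128-sample IQ snapshots classified into 11 hypothetical modulation classes.
iq = (np.random.randn(32, 128) + 1j * np.random.randn(32, 128)).astype(np.complex64)
x = to_amplitude_phase(iq)            # or to_frequency_domain(iq)
model = build_cnn(input_len=128, n_classes=11)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
probs = model.predict(x, verbose=0)   # (32, 11) class probabilities from the untrained network
```

Keeping the classifier topology fixed while swapping the representation function is one simple way to attribute accuracy differences to the input representation rather than to the architecture.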




Read also

This paper looks into the technology classification problem for a distributed wireless spectrum sensing network. First, a new data-driven model for Automatic Modulation Classification (AMC) based on long short-term memory (LSTM) is proposed. The model learns from the time-domain amplitude and phase information of the modulation schemes present in the training data without requiring expert features like higher-order cyclic moments. Analyses show that the proposed model yields an average classification accuracy of close to 90% at varying SNR conditions ranging from 0 dB to 20 dB. Further, we explore the utility of this LSTM model for a variable symbol rate scenario. We show that an LSTM-based model can learn good representations of variable-length time-domain sequences, which is useful for classifying modulation signals with different symbol rates. The achieved accuracy of 75% on an input sample length of 64, for which the model was not trained, substantiates its representation power. To reduce the data communication overhead from distributed sensors, the feasibility of classification using averaged magnitude spectrum data, or of online classification on the low-cost sensors, is studied. Furthermore, quantized realizations of the proposed models are analyzed for deployment on sensors with low processing power.
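As a rough illustration of the variable-length aspect (the layer sizes, class count, and training call below are assumptions, not the authors' exact model), an LSTM classifier can leave the time dimension unspecified so that the same weights apply to amplitude/phase sequences of different lengths:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers


def build_lstm_amc(n_classes):
    """Two-layer LSTM over variable-length amplitude/phase sequences."""
    return tf.keras.Sequential([
        layers.Input(shape=(None, 2)),           # (time steps, [amplitude, phase]); length left open
        layers.LSTM(128, return_sequences=True),
        layers.LSTM(128),
        layers.Dense(n_classes, activation="softmax"),
    ])


model = build_lstm_amc(n_classes=11)             # 11 modulation classes is an assumption
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Train on 128-sample sequences ...
x_train = np.random.rand(64, 128, 2).astype("float32")
y_train = np.random.randint(0, 11, size=(64,))
model.fit(x_train, y_train, epochs=1, verbose=0)

# ... then classify 64-sample sequences without retraining, as in the variable-length experiment.
x_short = np.random.rand(8, 64, 2).astype("float32")
probs = model.predict(x_short, verbose=0)        # (8, 11)
```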
139 - Damao Yang, Sihan Peng, He Huang 2018
We design a dispatch system to improve the peak service quality of video on demand (VOD). Our system predicts the hot videos during the peak hours of the next day based on the historical requests, and dispatches to the content delivery networks (CDNs) at the previous off-peak time. In order to scale to billions of videos, we build the system with two neural networks, one for video clustering and the other for dispatch policy developing. The clustering network employs autoencoder layers and reduces the video number to a fixed value. The policy network employs fully connected layers and ranks the clustered videos with dispatch probabilities. The two networks are coupled with weight-sharing temporal layers, which analyze the video request sequences with convolutional and recurrent modules. Therefore, the clustering and dispatch tasks are trained in an end-to-end mechanism. The real-world results show that our approach achieves an average prediction accuracy of 17%, compared with 3% from the present baseline method, for the same amount of dispatches.
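A minimal sketch of the coupled-network idea follows; the shared convolutional/recurrent temporal block feeding both a reconstruction (clustering) head and a dispatch-policy head is a simplification, and every size and layer choice here is an assumption rather than the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical sizes: one week of hourly request counts per video, 256 video clusters.
SEQ_LEN, N_CLUSTERS = 7 * 24, 256

requests = layers.Input(shape=(SEQ_LEN, 1), name="request_history")

# Weight-sharing temporal block: convolutional + recurrent analysis of the request sequence.
shared = layers.Conv1D(32, 5, activation="relu", padding="same")(requests)
shared = layers.GRU(64)(shared)

# Clustering head: autoencoder-style bottleneck whose code maps videos into a fixed-size space.
code = layers.Dense(16, activation="relu", name="cluster_code")(shared)
reconstruction = layers.Dense(SEQ_LEN, name="reconstruction")(code)

# Policy head: ranks the clustered videos with dispatch probabilities.
policy = layers.Dense(N_CLUSTERS, activation="softmax", name="dispatch_policy")(shared)

model = tf.keras.Model(requests, [reconstruction, policy])
model.compile(optimizer="adam", loss=["mse", "categorical_crossentropy"])
model.summary()
```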
368 - Yankun Xu, Jie Yang, Shiqi Zhao 2021
An accurate seizure prediction system enables early warnings before seizure onset for epileptic patients. It is extremely important for drug-refractory patients. Conventional seizure prediction works usually rely on features extracted from electroencephalography (EEG) recordings and classification algorithms such as regression or support vector machine (SVM) to locate the short period before seizure onset. However, such methods cannot achieve high-accuracy prediction due to the information loss of the hand-crafted features and the limited classification ability of regression and SVM algorithms. We propose an end-to-end deep learning solution using a convolutional neural network (CNN) in this paper. One- and two-dimensional kernels are adopted in the early- and late-stage convolution and max-pooling layers, respectively. The proposed CNN model is evaluated on the Kaggle intracranial and CHB-MIT scalp EEG datasets. Overall sensitivity, false prediction rate, and area under the receiver operating characteristic curve reach 93.5%, 0.063/h, 0.981 and 98.8%, 0.074/h, 0.988 on the two datasets, respectively. Comparison with state-of-the-art works indicates that the proposed model achieves superior prediction performance.
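The sketch below illustrates the early 1-D / late 2-D kernel split in a hypothetical Keras model; the channel count, window length, filter sizes, and the binary preictal/interictal output are assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Hypothetical input: 16 EEG channels, 1280 time samples per window, one value per sample.
N_CH, N_T = 16, 1280


def build_seizure_cnn():
    """CNN with 1-D kernels early (per-channel temporal filters) and 2-D kernels late."""
    x_in = layers.Input(shape=(N_CH, N_T, 1))
    # Early stage: (1 x k) kernels and (1 x k) pooling act along time only, per channel.
    x = layers.Conv2D(16, (1, 16), activation="relu", padding="same")(x_in)
    x = layers.MaxPooling2D((1, 4))(x)
    x = layers.Conv2D(32, (1, 8), activation="relu", padding="same")(x)
    x = layers.MaxPooling2D((1, 4))(x)
    # Late stage: (3 x 3) kernels mix information across channels and time.
    x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.GlobalAveragePooling2D()(x)
    out = layers.Dense(1, activation="sigmoid")(x)   # preictal vs. interictal probability
    return tf.keras.Model(x_in, out)


model = build_seizure_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=[tf.keras.metrics.AUC()])
```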
Many fields are now snowed under with an avalanche of data, which raises considerable challenges for computer scientists. Meanwhile, robotics (among other fields) can often only use a few dozen data points because acquiring them involves a process that is expensive or time-consuming. How can an algorithm learn with only a few data points?
74 - Christophe Moy 2019
This paper describes the principles and implementation results of reinforcement learning algorithms on IoT devices for radio collision mitigation in the unlicensed ISM bands. Learning is used here to improve both the IoT network's capability to support a larger number of objects and the autonomy of IoT devices. We first illustrate the efficiency of the proposed approach in a proof-of-concept based on USRP software radio platforms operating on real radio signals. It shows how collisions with other RF signals present in the ISM band are diminished for a given IoT device. Then we describe the first implementation of learning algorithms on LoRa devices operating in a real LoRaWAN network, which we named IoTligent. The proposed solution adds neither processing overhead, so it can be run on the IoT devices, nor network overhead, so no change to LoRaWAN is required. Real-life experiments have been carried out in a realistic LoRa network and show that the IoTligent device battery life can be extended by a factor of 2 in the scenarios we faced during our experiment.
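One lightweight way to realize this kind of on-device learning is a multi-armed bandit over the available channels; the UCB1 sketch below is purely illustrative (the channel count and acknowledgement-based reward model are assumptions), not the IoTligent implementation itself.

```python
import math
import random


class UCB1ChannelSelector:
    """UCB1 bandit: each ISM-band channel is an arm; reward is 1 if the uplink
    was acknowledged (no collision) and 0 otherwise."""

    def __init__(self, n_channels):
        self.counts = [0] * n_channels    # how often each channel was used
        self.values = [0.0] * n_channels  # running mean reward per channel
        self.t = 0

    def select(self):
        self.t += 1
        # Try every channel once before applying the UCB rule.
        for ch, c in enumerate(self.counts):
            if c == 0:
                return ch
        scores = [v + math.sqrt(2 * math.log(self.t) / c)
                  for v, c in zip(self.values, self.counts)]
        return scores.index(max(scores))

    def update(self, ch, reward):
        self.counts[ch] += 1
        self.values[ch] += (reward - self.values[ch]) / self.counts[ch]


# Toy simulation: channel 2 is the least congested (highest acknowledgement probability).
ack_prob = [0.3, 0.5, 0.9, 0.4]
agent = UCB1ChannelSelector(n_channels=4)
for _ in range(1000):
    ch = agent.select()
    agent.update(ch, 1 if random.random() < ack_prob[ch] else 0)
print("preferred channel:", agent.counts.index(max(agent.counts)))
```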