
Spectrum Sensing Based on Deep Learning Classification for Cognitive Radios

Added by Shilian Zheng
Publication date: 2019
Research language: English





Spectrum sensing is a key technology for cognitive radios. We formulate spectrum sensing as a classification problem and propose a sensing method based on deep learning classification. We normalize the received signal power to overcome the effects of noise power uncertainty. We train the model with as many types of signals as possible, as well as noise data, so that the trained network can adapt to new, untrained signals. We also use transfer learning strategies to improve the performance on real-world signals. Extensive experiments are conducted to evaluate the performance of this method. The simulation results show that the proposed method outperforms two traditional spectrum sensing methods, namely the maximum-minimum eigenvalue ratio based method and the frequency-domain entropy based method. In addition, experimental results on new, untrained signal types show that our method can adapt to the detection of these new signals. Furthermore, the real-world signal detection experiments show that detection performance can be further improved by transfer learning. Finally, experiments under colored noise show that our proposed method retains superior detection performance, while the traditional methods degrade significantly, which further validates the superiority of our method.
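As a rough illustration of the classification formulation and the power normalization described above, the following Python sketch (using NumPy and PyTorch, assumed available) normalizes a received snapshot to unit average power and scores it with a small convolutional binary classifier. The network shown is a placeholder for illustration, not the authors' actual architecture.

import numpy as np
import torch
import torch.nn as nn

def normalize_power(x):
    # Scale the complex snapshot to unit average power so the classifier
    # does not depend on the absolute (and uncertain) noise power.
    p = np.mean(np.abs(x) ** 2)
    return x / np.sqrt(p + 1e-12)

class SensingCNN(nn.Module):
    # Binary classifier: class 1 = signal present, class 0 = noise only.
    # Input shape is (batch, 2, N), holding the I and Q components.
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

# Example: score one noise-only snapshot of 1024 samples.
x = normalize_power(np.random.randn(1024) + 1j * np.random.randn(1024))
iq = torch.tensor(np.stack([x.real, x.imag]), dtype=torch.float32).unsqueeze(0)
logits = SensingCNN()(iq)  # argmax of the logits gives the sensing decision

In practice, the same network could then be fine-tuned on a small amount of labeled real-world data, which corresponds to the transfer learning step mentioned in the abstract.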



Related research

Cognitive radios sense the radio spectrum in order to find unused frequency bands and use them in an agile manner. Transmission by the primary user must be detected reliably even in the low signal-to-noise ratio (SNR) regime and in the face of shadowing and fading. Communication signals are typically cyclostationary and have many periodic statistical properties related to, for example, the symbol rate, the coding and modulation schemes, and the guard periods. These properties can be exploited in designing a detector and in distinguishing between the primary and secondary users' signals. In this paper, a generalized likelihood ratio test (GLRT) for detecting the presence of cyclostationarity using multiple cyclic frequencies is proposed. Distributed decision making is employed by combining the quantized local test statistics from many secondary users. User cooperation mitigates the effects of shadowing and provides a larger footprint for the cognitive radio system. Simulation examples demonstrate the resulting performance gains in the low-SNR regime and the benefits of cooperative detection.
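To make the cyclostationarity idea concrete, here is a small Python/NumPy sketch that estimates the cyclic autocorrelation at several candidate cyclic frequencies, forms a local energy statistic, and fuses quantized statistics from several secondary users. The actual GLRT weighting and fusion rule of the paper are not reproduced here; the quantizer scale and fusion threshold are placeholders.

import numpy as np

def cyclic_autocorrelation(x, alpha, lag, fs):
    # Estimate R_x^alpha(lag) = mean of x[n] * conj(x[n+lag]) * exp(-j*2*pi*alpha*n/fs).
    n = np.arange(len(x) - lag)
    return np.mean(x[n] * np.conj(x[n + lag]) * np.exp(-2j * np.pi * alpha * n / fs))

def local_statistic(x, alphas, lags, fs):
    # Energy of the estimated cyclic autocorrelations over several cyclic
    # frequencies; it stays near zero for stationary noise.
    vals = [cyclic_autocorrelation(x, a, l, fs) for a in alphas for l in lags]
    return float(np.sum(np.abs(vals) ** 2))

def cooperative_decision(local_stats, n_bits=2, scale=1.0):
    # Quantize each secondary user's statistic to a few bits and fuse by
    # summation, mimicking the distributed decision making described above.
    levels = 2 ** n_bits - 1
    q = np.clip(np.round(np.array(local_stats) / scale * levels), 0, levels)
    return q.sum() > levels * len(local_stats) / 2  # illustrative fusion threshold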
Spectrum sensing is one of the enabling functionalities for cognitive radio (CR) systems to operate in the spectrum white space. To protect the primary incumbent users from interference, the CR is required to detect incumbent signals at very low signal-to-noise ratio (SNR). In this paper, we present a spectrum sensing technique based on correlating spectra for detection of television (TV) broadcasting signals. The basic strategy is to correlate the periodogram of the received signal with the a priori known spectral features of the primary signal. We show that, according to the Neyman-Pearson criterion, this spectral correlation-based sensing technique is asymptotically optimal at very low SNR and with a large sensing time. From the system design perspective, we analyze the effect of the spectral features on the spectrum sensing performance. Through the optimization analysis, we obtain useful insights on how to choose effective spectral features to achieve reliable sensing. Simulation results show that the proposed sensing technique can reliably detect analog and digital TV signals at SNRs as low as -20 dB.
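A hedged sketch of the spectral correlation test described above, assuming a known spectral template for the primary signal: the periodogram of the received block is normalized and correlated with the template, and the decision threshold (which would be set for a target false-alarm rate under the Neyman-Pearson criterion) is left as a placeholder.

import numpy as np

def spectral_correlation_statistic(x, template_psd):
    # Periodogram of the received block, evaluated on the template's frequency grid.
    periodogram = np.abs(np.fft.fft(x, n=len(template_psd))) ** 2 / len(x)
    # Normalizing both sides removes the unknown power scaling.
    p = periodogram / periodogram.sum()
    t = template_psd / np.sum(template_psd)
    return float(np.dot(p, t))

def channel_occupied(x, template_psd, threshold):
    # Declare the band occupied when the correlation exceeds the threshold.
    return spectral_correlation_statistic(x, template_psd) > threshold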
Moving loads such as cars and trains are very useful sources of seismic waves, which can be analyzed to retrieve information on the seismic velocity of subsurface materials using the techniques of ambient noise seismology. This information is valuable for a variety of applications such as geotechnical characterization of the near-surface, seismic hazard evaluation, and groundwater monitoring. However, for such processes to converge quickly, data segments with appropriate noise energy should be selected. Distributed Acoustic Sensing (DAS) is a novel sensing technique that enables acquisition of these data at very high spatial and temporal resolution over tens of kilometers. One major challenge when utilizing DAS technology is the large volume of data that is produced, which presents a significant big-data challenge in finding regions of useful energy. In this work, we present a highly scalable and efficient approach to process real, complex DAS data by integrating physics knowledge acquired during a data exploration phase, followed by deep supervised learning to identify useful coherent surface waves generated by anthropogenic activity, a class of seismic waves that is abundant in these recordings and is useful for geophysical imaging. Data exploration and training were done on 130 gigabytes (GB) of DAS measurements. Using parallel computing, we were able to run inference on an additional 170 GB of data (the equivalent of 10 days' worth of recordings) in less than 30 minutes. Our method provides interpretable patterns describing the interaction of ground-based human activities with the buried sensors.
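As a rough illustration of the supervised part of this pipeline (the physics-guided data exploration and the parallel-computing setup are not shown), the sketch below tiles a DAS record into fixed-size channel-by-time patches and labels each patch with a small 2D CNN. The patch sizes and the network are assumptions made for illustration only.

import torch
import torch.nn as nn

class SurfaceWaveClassifier(nn.Module):
    # Tiny 2D CNN labelling a (channels x time) DAS patch as containing
    # coherent surface-wave energy (class 1) or not (class 0).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
        )

    def forward(self, patch):  # patch: (batch, 1, n_channels, n_samples)
        return self.net(patch)

def patches(record, n_ch=64, n_t=512):
    # Tile a DAS record (channels x samples) into fixed-size patches so the
    # inference work can be split across many parallel workers.
    for i in range(0, record.shape[0] - n_ch + 1, n_ch):
        for j in range(0, record.shape[1] - n_t + 1, n_t):
            yield record[i:i + n_ch, j:j + n_t]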
Deep learning methods achieve great success in many areas due to their powerful feature extraction capabilities and end-to-end training mechanism, and they have recently been introduced for radio signal modulation classification. In this paper, we propose a novel deep learning framework called SigNet, in which a signal-to-matrix (S2M) operator first converts the original signal into a square matrix and is co-trained with a follow-up CNN architecture for classification. This model is further accelerated by integrating 1D convolution operators, leading to the upgraded model SigNet2.0. Experiments on two signal datasets show that both SigNet and SigNet2.0 outperform a number of well-known baselines, achieving state-of-the-art performance. Notably, they obtain significantly higher accuracy than 1D-ResNet and 2D-CNN (by up to 70.5%), while training much faster than LSTM (saving up to 88.0% of training time). More interestingly, our proposed models behave extremely well in few-shot learning when only a small training set is provided. They achieve relatively high accuracy even when only 1% of the training data is kept, while other baseline models lose their effectiveness much more quickly as the dataset shrinks. These results suggest that SigNet/SigNet2.0 could be extremely useful in situations where labeled signal data are difficult to obtain.
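A minimal sketch of the S2M-plus-CNN idea, assuming the S2M operator can be approximated by a learnable linear map from a length-L signal to an m x m matrix; SigNet's actual operator, CNN, and class count are not reproduced here.

import torch
import torch.nn as nn

class SimpleS2M(nn.Module):
    # Toy signal-to-matrix operator: a learnable linear projection from a
    # length-L signal to an m x m matrix (a stand-in for the paper's S2M).
    def __init__(self, sig_len=1024, m=32):
        super().__init__()
        self.m = m
        self.proj = nn.Linear(sig_len, m * m)

    def forward(self, x):  # x: (batch, sig_len)
        return self.proj(x).view(-1, 1, self.m, self.m)

class S2MClassifier(nn.Module):
    # S2M followed by a small 2D CNN; both parts are trained jointly.
    def __init__(self, sig_len=1024, n_classes=11):
        super().__init__()
        self.s2m = SimpleS2M(sig_len)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.cnn(self.s2m(x))

logits = S2MClassifier()(torch.randn(4, 1024))  # 4 signals -> 4 x n_classes scores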
Signal processing, communications, and control have traditionally relied on classical statistical modeling techniques. Such model-based methods use mathematical formulations that represent the underlying physics, prior information, and additional domain knowledge. Simple classical models are useful but sensitive to inaccuracies and may lead to poor performance when real systems display complex or dynamic behavior. On the other hand, purely data-driven, model-agnostic approaches are becoming increasingly popular as datasets become abundant and the power of modern deep learning pipelines increases. Deep neural networks (DNNs) use generic architectures that learn to operate from data and demonstrate excellent performance, especially for supervised problems. However, DNNs typically require massive amounts of data and immense computational resources, limiting their applicability in some signal processing scenarios. We are interested in hybrid techniques that combine principled mathematical models with data-driven systems to benefit from the advantages of both approaches. Such model-based deep learning methods exploit both partial domain knowledge, via mathematical structures designed for specific problems, and learning from limited data. In this article we survey the leading approaches for studying and designing model-based deep learning systems. We divide hybrid model-based/data-driven systems into categories based on their inference mechanism. We provide a comprehensive review of the leading approaches for combining model-based algorithms with deep learning in a systematic manner, along with concrete guidelines and detailed signal-processing-oriented examples from recent literature. Our aim is to facilitate the design and study of future systems at the intersection of signal processing and machine learning that incorporate the advantages of both domains.
