Machine learning (ML) methods are ubiquitous in wireless communication systems and have proven powerful for applications including radio-frequency (RF) fingerprinting, automatic modulation classification, and cognitive radio. However, the large size of ML models can make them difficult to implement on edge devices for latency-sensitive downstream tasks. In wireless communication systems, ML data processing at a sub-millisecond scale will enable real-time network monitoring to improve security and prevent infiltration. In addition, compact, integrable hardware platforms that can implement ML models at the chip scale will find much broader application in wireless communication networks. Toward real-time wireless signal classification at the edge, we propose a novel compact deep network that combines a photonic-hardware-inspired recurrent neural network with a simplified convolutional classifier, and we demonstrate its application to the identification of RF emitters from their random transmissions. With the proposed model, we achieve 96.32% classification accuracy over a set of 30 identical ZigBee devices while using 50 times fewer training parameters than an existing state-of-the-art CNN classifier. Thanks to this large reduction in network size, we demonstrate real-time RF fingerprinting with 0.219 ms latency on a small-scale FPGA board, the PYNQ-Z1.
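As an illustration of the kind of compact recurrent-plus-convolutional architecture described above, the following is a minimal sketch only; the GRU stand-in for the photonic-hardware-inspired recurrent cell, the layer widths, and the 1024-sample burst length are our assumptions rather than the authors' design (only the 30-device output matches the abstract).

```python
# Hedged sketch (not the authors' implementation): a compact recurrent front end
# followed by a small convolutional classifier for IQ-sample RF fingerprinting.
# A GRU stands in for the photonic-hardware-inspired recurrent cell; all
# layer sizes and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class CompactRFFingerprinter(nn.Module):
    def __init__(self, num_devices=30, hidden=16):
        super().__init__()
        # Recurrent stage operating directly on (I, Q) sample pairs.
        self.rnn = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        # Simplified convolutional classifier over the recurrent feature sequence.
        self.conv = nn.Sequential(
            nn.Conv1d(hidden, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.fc = nn.Linear(32, num_devices)

    def forward(self, iq):                 # iq: (batch, time, 2) I/Q samples
        feats, _ = self.rnn(iq)            # (batch, time, hidden)
        feats = feats.transpose(1, 2)      # (batch, hidden, time) for Conv1d
        return self.fc(self.conv(feats).squeeze(-1))

x = torch.randn(8, 1024, 2)               # 8 bursts of 1024 complex samples
print(CompactRFFingerprinter()(x).shape)   # torch.Size([8, 30])
```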
Thanks to its capability to classify complex phenomena without explicit modeling, deep learning (DL) has been demonstrated to be a key enabler of Wireless Signal Classification (WSC). Although DL can achieve very high accuracy under certain conditions, recent research has unveiled that the wireless channel can disrupt the features learned by the DL model during training, thus drastically reducing the classification performance in real-world live settings. Since retraining classifiers is cumbersome after deployment, existing work has leveraged carefully-tailored Finite Impulse Response (FIR) filters that, when applied at the transmitter side, can restore the features that are lost because of the channel actions, i.e., waveform synthesis. However, these approaches compute FIRs using offline optimization strategies, which limits their efficacy in highly-dynamic channel settings. In this paper, we improve the state of the art by proposing Chares, a Deep Reinforcement Learning (DRL)-based framework for channel-resilient adaptive waveform synthesis. Chares adapts to new and unseen channel conditions by optimally computing the FIRs in real time through DRL. Chares is a DRL agent whose architecture is based upon Twin Delayed Deep Deterministic Policy Gradients (TD3), which requires minimal feedback from the receiver and explores a continuous action space. Chares has been extensively evaluated on two well-known datasets. We have also evaluated the real-time latency of Chares with an implementation on a field-programmable gate array (FPGA). Results show that Chares increases the accuracy by up to 4.1x with respect to the case where no waveform synthesis is performed and by 1.9x with respect to existing work, and can compute new actions within 41 us.
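To make the waveform-synthesis step concrete, here is a minimal sketch of applying a complex FIR filter, such as one a DRL agent might output as an action, to transmitted baseband IQ samples; the tap count and values are illustrative assumptions, not the filters learned by Chares.

```python
# Hedged sketch of the waveform-synthesis step only: a complex-valued FIR filter
# (whose taps would be produced by the DRL agent) is convolved with the
# transmitted baseband IQ samples before the channel. Tap count and values
# below are illustrative assumptions.
import numpy as np

def apply_fir(iq_samples: np.ndarray, taps: np.ndarray) -> np.ndarray:
    """Convolve complex baseband samples with FIR taps (same-length output)."""
    return np.convolve(iq_samples, taps, mode="same")

rng = np.random.default_rng(0)
iq = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
# Example action: a near-identity 7-tap filter with small complex perturbations.
taps = np.array([0, 0, 0, 1, 0, 0, 0], dtype=complex)
taps += 0.05 * (rng.standard_normal(7) + 1j * rng.standard_normal(7))
print(apply_fir(iq, taps).shape)  # (4096,)
```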
Mobile edge learning is an emerging technique that enables distributed edge devices to collaborate in training shared machine learning models by exploiting their local data samples and communication and computation resources. To deal with the straggler dilemma faced by this technique, this paper proposes a new device-to-device (D2D)-enabled data sharing approach, in which different edge devices share their data samples with each other over communication links in order to properly adjust their computation loads and increase the training speed. Under this setup, we optimize the radio resource allocation for both data sharing and distributed training, with the objective of minimizing the total training delay under fixed numbers of local and global iterations. Numerical results show that the proposed data sharing design significantly reduces the training delay, and also enhances the training accuracy when the data samples are non-independent and identically distributed (non-IID) among edge devices.
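As a hedged illustration of the straggler dilemma targeted above (the notation is ours, not the paper's), the per-global-iteration training delay is dominated by the slowest device:

$$ T_{\text{iter}} = \max_{k}\,\frac{c\,D_k}{f_k} + T_{\text{comm}}, $$

where $D_k$ is the number of local samples at device $k$ after sharing, $f_k$ its CPU frequency, $c$ the computation cycles per sample, and $T_{\text{comm}}$ the model-exchange delay; redistributing the $D_k$ over D2D links shrinks the max term at the cost of additional data-sharing delay.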
A real-time ranging lidar with a 0.1 MHz update rate and few-micrometer resolution, incorporating dispersive Fourier transformation and instantaneous microwave frequency measurement, is proposed and demonstrated. As the time-stretched femtosecond laser pulse passes through an all-fiber Mach-Zehnder interferometer, where the detection light beam is inserted into the optical path of one arm, the displacement is encoded into the frequency variation of the temporal interferogram. To deal with the challenges in storage and real-time processing of the microwave pulse generated on a photodetector, we turn to all-optical signal processing. A carrier wave is modulated by the time-domain interferogram using an intensity modulator. After that, the frequency variation of the microwave pulse is transferred to the first-order sidebands. Finally, the frequency shift of the sidebands is converted into a transmission change through a symmetric-locked frequency discriminator. In the experiment, a real-time ranging system with adjustable dynamic range and detection sensitivity is realized by incorporating a programmable optical filter. A standard deviation of 7.64 µm and an overall mean error of 19.10 µm are obtained over a 15 mm detection range, and a standard deviation of 37.73 µm and an overall mean error of 36.63 µm over a 45 mm detection range.
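As a hedged, simplified statement of the encoding described above (our notation, omitting the modulation and discriminator stages), the fringe frequency of the time-stretched interferogram scales with the optical path difference:

$$ f_{\mathrm{RF}} \approx \frac{\Delta L_{\mathrm{OPD}}}{D_{\mathrm{tot}}\,\lambda_0^{2}}, $$

where $\Delta L_{\mathrm{OPD}}$ is the optical path difference set by the target displacement, $D_{\mathrm{tot}}$ is the total group-velocity dispersion of the time stretch (in ps/nm), and $\lambda_0$ is the center wavelength, so a change in displacement reads out directly as a microwave frequency shift.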
We designed and implemented a deep-learning-based RF signal classifier on the Field Programmable Gate Array (FPGA) of an embedded software-defined radio platform, DeepRadio, that classifies the signals received through the RF front end into different modulation types in real time and with low power. This classifier implementation successfully captures complex characteristics of wireless signals to serve critical applications in wireless security and communications systems, such as identifying spoofing signals in signal authentication systems, detecting target emitters and jammers in electronic warfare (EW) applications, discriminating primary and secondary users in cognitive radio networks, interference hunting, and adaptive modulation. Empowered by low-power and low-latency embedded computing, the deep neural network runs directly on the FPGA fabric of DeepRadio while maintaining classifier accuracy close to the software performance. We evaluated the performance when another SDR (USRP) transmits signals with different modulation types at different power levels and DeepRadio receives the signals and classifies them in real time on its FPGA. A smartphone with a mobile app is connected to DeepRadio to initiate the experiment and visualize the classification results. With real radio transmissions over the air, we show that the classifier implemented on DeepRadio achieves high accuracy with low latency (microseconds per sample) and low energy consumption (microjoules per sample), and that this performance is not matched by other embedded platforms such as an embedded graphics processing unit (GPU).
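One ingredient typically required to run such a network on FPGA fabric is fixed-point quantization of the trained weights. The sketch below shows a generic symmetric 8-bit scheme as an assumption of what such a toolflow might do; it is not DeepRadio's actual implementation, and the example weight tensor is random.

```python
# Hedged sketch of one step commonly needed for DNN inference on FPGA fabric:
# post-training quantization of a layer's weights to 8-bit fixed point.
# The symmetric per-tensor scheme below is generic, not DeepRadio's toolflow.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor quantization: returns int8 weights and the scale."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.randn(64, 2, 7).astype(np.float32)   # e.g. a Conv1d weight tensor
q, scale = quantize_int8(w)
# Report the worst-case reconstruction error introduced by quantization.
print(q.dtype, scale, np.max(np.abs(w - q.astype(np.float32) * scale)))
```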
We study the image retrieval problem at the wireless edge, where an edge device captures an image, which is then used to retrieve similar images from an edge server. These can be images of the same person or a vehicle taken from other cameras at different times and locations. Our goal is to maximize the accuracy of the retrieval task under power and bandwidth constraints over the wireless link. Due to the stringent delay constraint of the underlying application, sending the whole image at a sufficient quality is not possible. We propose two alternative schemes based on digital and analog communications, respectively. In the digital approach, we first propose a deep neural network (DNN) aided retrieval-oriented image compression scheme, whose output bit sequence is transmitted over the channel using conventional channel codes. In the analog joint source and channel coding (JSCC) approach, the feature vectors are directly mapped into channel symbols. We evaluate both schemes on image-based re-identification (re-ID) tasks under different channel conditions, including both static and fading channels. We show that the JSCC scheme significantly increases the end-to-end accuracy, speeds up the encoding process, and provides graceful degradation with channel conditions. The proposed architecture is evaluated through extensive simulations on different datasets and channel conditions, as well as through ablation studies.
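As a minimal sketch of the analog JSCC mapping mentioned above, the feature vector can be paired into complex channel symbols and scaled to meet an average power constraint before the (here AWGN) channel; the vector length, power constraint, and SNR below are illustrative assumptions rather than the paper's settings.

```python
# Hedged sketch of an analog JSCC mapping: a real feature vector is paired into
# complex channel symbols, normalized to an average power constraint, and passed
# through an AWGN channel. Dimensions, P, and the SNR are illustrative assumptions.
import numpy as np

def jscc_transmit(features: np.ndarray, P: float = 1.0) -> np.ndarray:
    z = features[0::2] + 1j * features[1::2]       # pair reals into complex symbols
    k = z.size
    return np.sqrt(k * P) * z / np.linalg.norm(z)  # enforce average power P per symbol

rng = np.random.default_rng(0)
f = rng.standard_normal(128).astype(np.float32)    # feature vector from the encoder
x = jscc_transmit(f)
snr_db = 10.0
noise_var = 10 ** (-snr_db / 10)                   # noise power for unit signal power
y = x + np.sqrt(noise_var / 2) * (rng.standard_normal(x.size)
                                  + 1j * rng.standard_normal(x.size))
print(x.size, np.mean(np.abs(x) ** 2))             # 64 symbols, average power ~= 1.0
```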