
Reinforcement Learning for Interference Avoidance Game in RF-Powered Backscatter Communications

Added by Ali Rahmati
Publication date: 2019
Language: English





RF-powered backscatter communication is a promising technology for battery-free applications such as the Internet of Things (IoT) and wireless sensor networks (WSNs). However, because this kind of communication relies on ambient RF signals and battery-free devices, it is vulnerable to interference and jamming. In this paper, we model the interaction between the user and a smart interferer in an ambient backscatter communication network as a game. We design utility functions for both the user and the interferer in which the backscattering time is taken into account. The convexity of both sub-game optimization problems is proved, and the closed-form expression for the equilibrium of the Stackelberg game is obtained. Due to the lack of information about the system SNR and the transmission strategy of the interferer, the optimal strategy is obtained using the Q-learning algorithm in a dynamic, iterative manner. We further introduce hotbooting Q-learning as an effective approach to expedite the convergence of traditional Q-learning. Simulation results show that our approach obtains a considerable performance improvement over random and fixed backscattering-time transmission strategies and improves the convergence speed of Q-learning by about 31%.
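As a concrete illustration of the learning step, the snippet below is a minimal tabular Q-learning sketch in which the user selects a backscattering-time fraction against an interferer, and the Q-table is warm-started ("hotbooting") from a short preliminary run instead of all zeros. The utility function, the discretized action and state sets, and all numerical parameters are illustrative assumptions, not the utilities or hotbooting procedure from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Discretized decision variables (illustrative): the user's backscattering-time
# fraction and the interferer's power level, which the user observes as a state.
BACKSCATTER_FRACTIONS = np.linspace(0.1, 0.9, 9)
JAM_POWERS = np.linspace(0.0, 1.0, 5)

def utility(alpha, jam_power, snr=10.0):
    """Toy stand-in for the user's utility: backscatter throughput degraded by
    interference, plus harvest-then-transmit throughput in the remaining time."""
    backscatter_rate = alpha * np.log2(1 + snr / (1 + 5 * jam_power))
    harvest_rate = (1 - alpha) * np.log2(1 + 0.5 * snr)
    return backscatter_rate + harvest_rate

def run_q_learning(episodes, lr=0.1, gamma=0.9, eps=0.1, q_init=None):
    """Tabular Q-learning over (interferer state, backscattering fraction)."""
    q = np.zeros((len(JAM_POWERS), len(BACKSCATTER_FRACTIONS))) if q_init is None else q_init.copy()
    state = rng.integers(len(JAM_POWERS))
    rewards = []
    for _ in range(episodes):
        # epsilon-greedy action selection
        action = rng.integers(q.shape[1]) if rng.random() < eps else int(np.argmax(q[state]))
        reward = utility(BACKSCATTER_FRACTIONS[action], JAM_POWERS[state])
        next_state = rng.integers(len(JAM_POWERS))  # interferer's next move, model unknown to the user
        q[state, action] += lr * (reward + gamma * q[next_state].max() - q[state, action])
        state = next_state
        rewards.append(reward)
    return q, float(np.mean(rewards[-200:]))

# Hotbooting: initialize the Q-table from a short preliminary run so that online
# learning starts from informed values rather than zeros.
q_warm, _ = run_q_learning(episodes=300)
_, avg_cold = run_q_learning(episodes=2000)
_, avg_hot = run_q_learning(episodes=2000, q_init=q_warm)
print(f"average reward, cold start: {avg_cold:.3f}, hotbooted: {avg_hot:.3f}")
```

Hotbooting only changes the starting point of the Q-table; the update rule is unchanged, which is why it can speed up convergence without altering the fixed point the learning converges to.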



Related research

For an RF-powered cognitive radio network with ambient backscattering capability, while the primary channel is busy, the RF-powered secondary user (RSU) can either backscatter the primary signal to transmit its own data or harvest energy from the primary signal (and store it in its battery). The harvested energy can then be used to transmit data when the primary channel becomes idle. To maximize the throughput of the secondary system, it is critical for the RSU to decide when to backscatter and when to harvest energy. This optimal decision has to account for the dynamics of the primary channel, the energy storage capability, and the data to be sent. To tackle this problem, we propose a Markov decision process (MDP)-based framework that optimizes the RSU's decisions based on its current state, e.g., energy, data, and the primary channel state. As the state information may not be readily available at the RSU, we then design a low-complexity online reinforcement learning algorithm that guides the RSU to the optimal solution without requiring prior and complete information about the environment. Extensive simulation results show that the proposed solution achieves up to 50% higher throughput than conventional methods.
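As a rough illustration of how such an MDP can be set up and solved when the model is known, the sketch below runs value iteration over a toy state space of (energy level, data queue, channel busy/idle). The state discretization, transition probabilities, and rewards are illustrative assumptions, not the model used in the paper.

```python
import itertools

# Toy MDP for the backscatter / harvest / transmit decision (illustrative numbers).
E_MAX, D_MAX = 3, 3          # discretized energy and data-queue levels
P_BUSY = 0.6                 # probability the primary channel is busy in the next slot
GAMMA = 0.95
ACTIONS = ("backscatter", "harvest", "transmit", "idle")

# State = (energy units, queued packets, channel busy flag).
STATES = list(itertools.product(range(E_MAX + 1), range(D_MAX + 1), (0, 1)))

def step(state, action):
    """Return a list of (probability, next_state, reward) outcomes."""
    e, d, busy = state
    outcomes = []
    for next_busy, p in ((1, P_BUSY), (0, 1 - P_BUSY)):
        if action == "backscatter" and busy and d > 0:
            outcomes.append((p, (e, d - 1, next_busy), 1.0))      # send one packet by backscattering
        elif action == "harvest" and busy and e < E_MAX:
            outcomes.append((p, (e + 1, d, next_busy), 0.0))      # store one unit of harvested energy
        elif action == "transmit" and not busy and e > 0 and d > 0:
            outcomes.append((p, (e - 1, d - 1, next_busy), 1.5))  # active transmission on the idle channel
        else:
            outcomes.append((p, (e, d, next_busy), 0.0))          # infeasible action or stay idle
    return outcomes

def q_value(V, s, a):
    return sum(p * (r + GAMMA * V[s2]) for p, s2, r in step(s, a))

# Value iteration, then the greedy policy.
V = {s: 0.0 for s in STATES}
for _ in range(200):
    V = {s: max(q_value(V, s, a) for a in ACTIONS) for s in STATES}
policy = {s: max(ACTIONS, key=lambda a: q_value(V, s, a)) for s in STATES}

print(policy[(1, 2, 1)])     # best action with 1 energy unit, 2 packets queued, channel busy
```

The online reinforcement learning algorithm described in the abstract would learn an equivalent policy without ever seeing P_BUSY or the transition rules used by step() above.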
Existing tag signal detection algorithms inevitably suffer from a high bit error rate (BER) due to the difficulty of estimating the channel state information (CSI). To eliminate the requirement of channel estimation and to improve system performance, in this paper we adopt a deep transfer learning (DTL) approach to implicitly extract the features of the communication channel and directly recover tag symbols. Inspired by the powerful capability of convolutional neural networks (CNNs) in exploring the features of data in matrix form, we design a novel covariance-matrix-aware neural network (CMNet)-based detection scheme to facilitate DTL for tag signal detection, which consists of offline learning, transfer learning, and online detection. Specifically, a CMNet-based likelihood ratio test (CMNet-LRT) is derived based on the minimum error probability (MEP) criterion. Taking advantage of the outstanding performance of DTL in transferring knowledge with only a few training samples, the proposed scheme can adaptively fine-tune the detector for different channel environments to further improve detection performance. Finally, extensive simulation results demonstrate that the BER performance of the proposed method is comparable to that of the optimal detection method with perfect CSI.
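The snippet below is a rough PyTorch sketch of the general idea: a small CNN takes the sample covariance matrix of the received signal (real and imaginary parts as two channels) and outputs a tag-bit decision, and only the classifier head is fine-tuned for a new channel, in the spirit of transfer learning. The architecture, input dimensions, and fine-tuning recipe are assumptions for illustration and do not reproduce the CMNet design.

```python
import torch
import torch.nn as nn

class CovarianceDetector(nn.Module):
    """Toy CNN that maps a sample covariance matrix (real/imaginary parts as two
    channels) to logits for the two tag bits. Sizes are illustrative only."""
    def __init__(self, n_antennas: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * n_antennas * n_antennas, 64), nn.ReLU(),
            nn.Linear(64, 2),
        )

    def forward(self, cov: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(cov))

def sample_covariance(y: torch.Tensor) -> torch.Tensor:
    """Stack real/imag parts of (1/N) * Y Y^H as a 2-channel 'image'."""
    cov = y @ y.conj().transpose(-1, -2) / y.shape[-1]
    return torch.stack((cov.real, cov.imag), dim=-3)

# Transfer-learning-style adaptation: keep the offline-trained feature extractor
# frozen and fine-tune only the classifier head with a few new-channel samples.
model = CovarianceDetector()
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

y = torch.randn(4, 8, 64, dtype=torch.cfloat)          # 4 bursts, 8 antennas, 64 snapshots
labels = torch.randint(0, 2, (4,))                      # placeholder tag bits
logits = model(sample_covariance(y))
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
optimizer.step()
print(logits.shape)                                     # torch.Size([4, 2])
```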
Ambient backscatter has been introduced with a wide range of applications for low-power wireless communications. In this article, we propose an optimal and low-complexity dynamic spectrum access framework for an RF-powered ambient backscatter system. In this system, the secondary transmitter not only harvests energy from ambient signals (from incumbent users) but also backscatters these signals to its receiver for data transmission. Under the dynamics of the ambient signals, we first adopt the Markov decision process (MDP) framework to obtain the optimal policy for the secondary transmitter, aiming to maximize the system throughput. However, the MDP-based optimization requires complete knowledge of environment parameters, e.g., the probability of a channel being idle and the probability of a successful packet transmission, which may not be practical to obtain. To cope with such incomplete knowledge of the environment, we develop a low-complexity online reinforcement learning algorithm that allows the secondary transmitter to learn from its decisions and then attain the optimal policy. Simulation results show that the proposed learning algorithm not only deals efficiently with the dynamics of the environment but also improves the average throughput by up to 50% and reduces the blocking probability and delay by up to 80% compared with conventional methods.
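The sketch below illustrates the model-free flavor of such an online algorithm: a tabular Q-learner chooses among backscattering, harvest-then-transmit, and waiting based only on the observed channel state and reward, without ever being told the idle probability or the success probabilities. The reward model and all numbers are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden environment parameters, used only to simulate the channel; the learner
# never reads them and relies purely on observed states and rewards.
P_IDLE = 0.3
ACTIONS = ("backscatter", "harvest_then_transmit", "wait")

def environment_step(action, channel_idle):
    """One-slot reward from the environment, treated as a black box by the learner."""
    if action == "backscatter" and not channel_idle:
        return 1.0 if rng.random() < 0.9 else 0.0      # backscattering succeeds w.p. 0.9 when busy
    if action == "harvest_then_transmit" and channel_idle:
        return 1.5 if rng.random() < 0.7 else 0.0      # active transmission succeeds w.p. 0.7 when idle
    return 0.0

q = np.zeros((2, len(ACTIONS)))                        # state 0 = busy, state 1 = idle
lr, gamma = 0.1, 0.9
for t in range(20000):
    idle = rng.random() < P_IDLE
    s = int(idle)
    eps = max(0.05, 1.0 - t / 5000)                    # decaying exploration rate
    a = rng.integers(len(ACTIONS)) if rng.random() < eps else int(np.argmax(q[s]))
    r = environment_step(ACTIONS[a], idle)
    s_next = int(rng.random() < P_IDLE)
    q[s, a] += lr * (r + gamma * q[s_next].max() - q[s, a])

print({("busy", "idle")[s]: ACTIONS[int(np.argmax(q[s]))] for s in (0, 1)})
```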
Xiaolun Jia, Xiangyun Zhou (2021)
We consider an ambient backscatter communication (AmBC) system aided by an intelligent reflecting surface (IRS). The optimization of the IRS to assist AmBC is extremely difficult when there is no prior channel knowledge, for which no design solutions are currently available. We utilize a deep reinforcement learning-based framework to jointly optimize the IRS and reader beamforming, with no knowledge of the channels or ambient signal. We show that the proposed framework can facilitate effective AmBC communication with a detection performance comparable to several benchmarks under full channel knowledge.
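As a very rough sketch of how IRS phase shifts could be tuned from a black-box reward with no channel knowledge, the snippet below uses a REINFORCE-style policy gradient over a Gaussian phase policy. This is a stand-in technique for illustration; the observation model, reward function, and network sizes are assumptions and do not correspond to the deep reinforcement learning framework in the paper.

```python
import math
import torch
import torch.nn as nn

N_IRS = 16          # number of IRS reflecting elements (illustrative)

class PhasePolicy(nn.Module):
    """Gaussian policy over IRS phase shifts in [-pi, pi], conditioned on a
    coarse observation (e.g., recent received-power readings)."""
    def __init__(self, obs_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, N_IRS))
        self.log_std = nn.Parameter(torch.zeros(N_IRS))

    def forward(self, obs: torch.Tensor) -> torch.distributions.Normal:
        mean = math.pi * torch.tanh(self.net(obs))
        return torch.distributions.Normal(mean, self.log_std.exp())

def black_box_reward(phases: torch.Tensor) -> float:
    """Stand-in for the detection metric measured at the reader; the optimal
    phase pattern is hidden from the agent, mimicking unknown channels."""
    target = torch.linspace(-1.0, 1.0, N_IRS)
    return -torch.mean((phases - target) ** 2).item()

policy = PhasePolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
baseline = 0.0
for episode in range(500):
    obs = torch.randn(8)                               # placeholder observation
    dist = policy(obs)
    phases = dist.sample()
    reward = black_box_reward(phases)
    baseline = 0.95 * baseline + 0.05 * reward         # running-average baseline to reduce variance
    loss = -dist.log_prob(phases).sum() * (reward - baseline)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```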
Machine learning (ML) provides effective means to learn from spectrum data and solve complex tasks involved in wireless communications. Supported by recent advances in computational resources and algorithmic designs, deep learning (DL) has found success in performing various wireless communication tasks such as signal recognition, spectrum sensing, and waveform design. However, ML in general and DL in particular have been found vulnerable to manipulations, giving rise to a field of study called adversarial machine learning (AML). Although AML has been extensively studied in other data domains such as computer vision and natural language processing, research on AML in the wireless communications domain is still in its early stage. This paper presents a comprehensive review of the latest research efforts focused on AML in wireless communications while accounting for the unique characteristics of wireless systems. First, the background of AML attacks on deep neural networks is discussed and a taxonomy of AML attack types is provided. Various methods of generating adversarial examples and attack mechanisms are also described. In addition, a holistic survey of existing research on AML attacks for various wireless communication problems, as well as the corresponding defense mechanisms in the wireless domain, is presented. Finally, as new attacks and defense techniques are developed, recent research trends and the overarching future outlook for AML in next-generation wireless communications are discussed.