Ambient backscatter communication is an emerging paradigm and a key enabler for pervasive connectivity of low-powered wireless devices. It is primarily beneficial in the Internet of Things (IoT) and in situations where computing and connectivity capabilities extend to sensors and miniature devices that exchange data on a low power budget. The premise of ambient backscatter communication is to build a network of devices capable of operating in a battery-free manner by means of smart networking, radio frequency (RF) energy harvesting, and power management at the granularity of individual bits and instructions. Given this shift in communication methods, it is essential to investigate the performance of these devices under practical constraints. To do so, this article formulates a model for wireless-powered ambient backscatter devices and derives a closed-form expression for the outage probability under Rayleigh fading. Based on this expression, the article determines the power-splitting factor that balances the tradeoff between energy harvesting and the achievable data rate. Our results also shed light on the complex interplay among the power-splitting factor, the amount of harvested energy, and the achievable data rate.
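As a rough illustration of the harvesting-versus-rate tradeoff governed by the power-splitting factor, the following Python sketch estimates the outage probability of a power-splitting backscatter link by Monte Carlo simulation under Rayleigh fading. The transmit power, noise power, rate threshold, and the cascaded-channel model used here are illustrative assumptions, not the paper's exact system model or closed-form result.

```python
import numpy as np

# Hedged sketch: Monte Carlo estimate of outage probability for a
# power-splitting ambient backscatter device under Rayleigh fading.
# All parameter values below are assumed for illustration.

rng = np.random.default_rng(0)

def outage_probability(rho, p_tx=1.0, noise=1e-3, rate_th=0.5, n=100_000):
    """Estimate P[log2(1 + SNR) < rate_th] by simulation.

    rho     : power-splitting factor (fraction of incident power harvested)
    p_tx    : ambient source transmit power (assumed)
    noise   : receiver noise power (assumed)
    rate_th : target rate in bit/s/Hz (assumed)
    """
    # Rayleigh fading -> exponentially distributed channel power gains
    g_sd = rng.exponential(1.0, n)   # source -> backscatter device
    g_dr = rng.exponential(1.0, n)   # backscatter device -> receiver
    # (1 - rho) of the incident power is backscattered; rho is harvested
    snr = (1.0 - rho) * p_tx * g_sd * g_dr / noise
    rate = np.log2(1.0 + snr)
    return np.mean(rate < rate_th)

# Sweep rho to visualize the harvesting/rate tradeoff
for rho in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"rho = {rho:.1f}  ->  outage ~ {outage_probability(rho):.3f}")
```

Sweeping rho in this way mirrors the tradeoff discussed above: a larger splitting factor harvests more energy but leaves less signal power for backscattering, which raises the outage probability.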
Existing tag signal detection algorithms inevitably suffer from a high bit error rate (BER) due to the difficulty of estimating the channel state information (CSI). To eliminate the requirement of channel estimation and to improve the system performance, in this paper we adopt a deep transfer learning (DTL) approach that implicitly extracts the features of the communication channel and directly recovers tag symbols. Inspired by the powerful capability of convolutional neural networks (CNNs) in exploring the features of data in matrix form, we design a novel covariance-matrix-aware neural network (CMNet)-based detection scheme to facilitate DTL for tag signal detection, which consists of offline learning, transfer learning, and online detection. Specifically, a CMNet-based likelihood ratio test (CMNet-LRT) is derived based on the minimum error probability (MEP) criterion. Taking advantage of the outstanding performance of DTL in transferring knowledge with only a small amount of training data, the proposed scheme can adaptively fine-tune the detector for different channel environments to further improve the detection performance. Finally, extensive simulation results demonstrate that the BER performance of the proposed method is comparable to that of the optimal detection method with perfect CSI.
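The PyTorch sketch below illustrates the general idea of a covariance-matrix-aware CNN detector: the sample covariance of the received snapshots is split into real and imaginary channels and fed to a small CNN that outputs tag-symbol logits. The architecture, dimensions, and the fine-tuning step shown are assumptions for illustration, not the CMNet design or the CMNet-LRT derived in the paper.

```python
import torch
import torch.nn as nn

# Hedged sketch of a covariance-matrix-aware CNN detector, loosely inspired
# by the CMNet idea: the network ingests the sample covariance matrix of the
# received signal (real/imaginary parts as two channels) and outputs logits
# for the tag symbol. Layer sizes are illustrative placeholders.

class CMNetSketch(nn.Module):
    def __init__(self, num_antennas: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(32 * num_antennas * num_antennas, 2)

    def forward(self, y):
        # y: (batch, num_antennas, num_samples) complex received snapshots
        cov = torch.einsum("bmt,bnt->bmn", y, y.conj()) / y.shape[-1]
        x = torch.stack((cov.real, cov.imag), dim=1)   # 2-channel "image"
        x = self.features(x).flatten(1)
        return self.classifier(x)                      # logits for tag bit in {0, 1}

# Freezing the convolutional features and fine-tuning only the classifier head
# is one simple way to mimic the transfer-learning stage on a new channel:
# model = CMNetSketch()
# for p in model.features.parameters():
#     p.requires_grad = False
```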
Ambient backscatter communications (AmBackComs) have been recognized as a spectrum- and energy-efficient technology for the Internet of Things, as they allow passive backscatter devices (BDs) to modulate their information onto legacy signals, e.g., cellular signals, and reflect them to their associated receivers while harvesting energy from the legacy signals to power their circuit operation. However, the co-channel interference between the backscatter link and the legacy link and the non-linear behavior of energy harvesters at the BDs have largely been ignored in the performance analysis of AmBackComs. Taking these two aspects into account, this paper provides a comprehensive outage performance analysis for an AmBackCom system with multiple backscatter links, where one of the backscatter links is opportunistically selected to leverage the legacy signals transmitted in a given resource block. For any selected backscatter link, we propose an adaptive reflection coefficient (RC), which is adapted to the non-linear energy harvesting (EH) model and the location of the selected backscatter link, to minimize the outage probability of the backscatter link. In order to study the impact of co-channel interference on both the backscatter and legacy links, we derive the outage probabilities of the legacy link and the backscatter link for a selected backscatter link. Furthermore, we study the best and worst outage performances of the backscatter system, where the selected backscatter link maximizes or minimizes the signal-to-interference-plus-noise ratio (SINR) at the backscatter receiver. We also study the best and worst outage performances of the legacy link, where the selected backscatter link causes the lowest and highest co-channel interference to the legacy receiver, respectively.
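A minimal sketch of how an adaptive reflection coefficient could be chosen under a non-linear (saturation) energy-harvesting model is given below. The logistic EH model, its parameters, the circuit power, and the grid search are illustrative assumptions in the spirit of the description above, not the paper's optimization or derived RC.

```python
import numpy as np

# Hedged sketch: adaptive reflection coefficient (RC) selection under a
# non-linear (saturation) energy-harvesting model. The logistic EH curve and
# all parameter values are assumed placeholders.

def harvested_power(p_in, p_sat=0.02, a=150.0, b=0.014):
    """Logistic-style non-linear EH model: output saturates at p_sat."""
    logistic = 1.0 / (1.0 + np.exp(-a * (p_in - b)))
    omega = 1.0 / (1.0 + np.exp(a * b))            # normalization offset
    return p_sat * (logistic - omega) / (1.0 - omega)

def adaptive_rc(p_incident, p_circuit=1e-3, grid=np.linspace(0.0, 1.0, 1001)):
    """Pick the largest RC whose leftover (1 - RC) share still powers the circuit.

    A larger RC reflects more power (better backscatter SINR) but leaves less
    for harvesting; the non-linear EH constraint caps how large it can be.
    """
    feasible = [rc for rc in grid
                if harvested_power((1.0 - rc) * p_incident) >= p_circuit]
    return max(feasible) if feasible else 0.0      # 0.0: BD cannot operate

print(adaptive_rc(p_incident=0.05))
```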
In this paper, we consider the design of a multiple-input multiple-output (MIMO) transmitter that simultaneously functions as a MIMO radar and as a base station for downlink multiuser communications. In addition to a power constraint, we require the covariance of the transmit waveform to equal a given optimal covariance for MIMO radar, which guarantees the radar performance. Under this constraint, we formulate and solve the signal-to-interference-plus-noise ratio (SINR) balancing problem for multiuser transmit beamforming via convex optimization. Since the interference cannot be completely eliminated under this constraint, we introduce dirty paper coding (DPC) to further cancel the interference, and formulate the SINR balancing and sum-rate maximization problems in the DPC regime. Although both problems are non-convex, we show that they can be reformulated as convex optimizations via Lagrange and downlink-uplink duality. In addition, we propose gradient-projection-based algorithms to solve the equivalent dual problem of SINR balancing in both the transmit beamforming and DPC regimes. The simulation results demonstrate a significant performance improvement of DPC over transmit beamforming, and also indicate that the degrees of freedom of the communication transmitter are restricted by the rank of the covariance.
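To make the covariance-constrained formulation concrete, the sketch below poses one feasibility check of an SINR-balancing bisection as a semidefinite program in CVXPY, with the sum of per-user transmit covariances forced to equal a given radar covariance. The channels, radar covariance, and dimensions are random placeholders, and the paper's duality-based gradient-projection algorithms and DPC regime are not reproduced here.

```python
import numpy as np
import cvxpy as cp

# Hedged sketch: semidefinite feasibility test for a candidate SINR target
# under the radar-covariance equality constraint. All data are random
# placeholders; this is one bisection step, not the full balancing algorithm.

rng = np.random.default_rng(1)
M, K = 4, 2                                # transmit antennas, users
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R_d = A @ A.conj().T
R_d = R_d / np.real(np.trace(R_d))         # given radar covariance (unit power)
sigma2, gamma = 1e-2, 1.0                  # noise power, SINR target to test

W = [cp.Variable((M, M), hermitian=True) for _ in range(K)]  # per-user covariances
constraints = [w >> 0 for w in W]
constraints += [sum(W) == R_d]             # transmit covariance fixed by the radar design
for k in range(K):
    Hk = np.outer(H[k], H[k].conj())       # h_k h_k^H
    desired = cp.real(cp.trace(W[k] @ Hk))
    interference = cp.real(cp.trace((sum(W) - W[k]) @ Hk))
    constraints += [desired >= gamma * (interference + sigma2)]

prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve()
print(f"SINR target {gamma}: {prob.status}")
```

Bisecting on gamma and repeating this feasibility test is one standard way to balance SINRs; the rank of R_d limits how much signal space the communication users can exploit, consistent with the degrees-of-freedom observation above.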
We consider an ambient backscatter communication (AmBC) system aided by an intelligent reflecting surface (IRS). Optimizing the IRS to assist AmBC is extremely difficult when no prior channel knowledge is available, a setting for which no design solutions currently exist. We utilize a deep reinforcement learning-based framework to jointly optimize the IRS and the reader beamforming with no knowledge of the channels or the ambient signal. We show that the proposed framework can facilitate effective AmBC communication with detection performance comparable to that of several benchmarks operating under full channel knowledge.
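The toy PyTorch sketch below mimics the model-free flavor of such a framework: a REINFORCE-style policy learns IRS phase shifts purely from a scalar reward (a backscatter SNR proxy) without ever observing the hidden channels. The reward, network, and training loop are illustrative assumptions, not the paper's DRL architecture or joint beamforming design.

```python
import torch
import torch.nn as nn

# Hedged sketch: policy-gradient (REINFORCE) learning of IRS phase shifts from
# reward feedback only, with the channels hidden from the agent. Everything
# below (sizes, reward, hyperparameters) is a toy placeholder.

torch.manual_seed(0)
N = 16                                             # IRS elements
h_ai = torch.randn(N, dtype=torch.cfloat)          # ambient source -> IRS (hidden)
h_ir = torch.randn(N, dtype=torch.cfloat)          # IRS -> reader (hidden)

policy = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, N))
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)

def reward(theta):
    # SNR proxy of the reflected path; the agent never sees h_ai / h_ir directly
    phase = torch.exp(1j * theta)
    return torch.abs(torch.sum(h_ai * phase * h_ir)) ** 2

baseline = 0.0
for step in range(500):
    mean = policy(torch.ones(1))                   # the "state" is a dummy constant here
    dist = torch.distributions.Normal(mean, 0.1)
    theta = dist.sample()
    r = reward(theta)
    advantage = (r - baseline).detach()
    loss = -dist.log_prob(theta).sum() * advantage
    opt.zero_grad(); loss.backward(); opt.step()
    baseline = 0.9 * baseline + 0.1 * r.item()     # running baseline reduces variance

print("learned reflected-path gain:", reward(policy(torch.ones(1))).item())
```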
An unmanned aircraft system (UAS) consists of an unmanned aerial vehicle (UAV) and its controller, which communicate by radio. While the remote controller (RC) is traditionally operated by a person who maintains visual line of sight with the UAV it controls, the trend is moving towards long-range control and autonomous operation. Enabling this requires reliable and widely available wireless connectivity, since it is the only means to manually control a UAV or to take control of an autonomous UAV flight. This article surveys the ongoing Third Generation Partnership Project (3GPP) standardization activities for enabling networked UASs. In particular, we present the requirements, the envisaged architecture, and the services to be offered to and by UAVs and RCs, which will communicate with one another, with the UAS Traffic Management (UTM), and with other users through cellular networks. Critical research directions relate to security and spectrum coexistence, among others. We identify major R&D platforms that will drive the standardization of cellular communications networks and applications.