
Delay Constrained Buffer-Aided Relay Selection in the Internet of Things with Decision-Assisted Reinforcement Learning

Added by Gaojie Chen
Publication date: 2020
Language: English





This paper investigates reinforcement learning for relay selection in delay-constrained buffer-aided networks. Buffer-aided relay selection significantly improves the outage performance, but often at the price of higher latency. On the other hand, modern communication systems such as the Internet of Things often impose strict latency requirements. It is thus necessary to find relay selection policies that achieve good throughput performance in the buffer-aided relay network while satisfying the delay constraint. With buffers employed at the relays and delay constraints imposed on the data transmission, finding the best relay selection becomes a complicated high-dimensional problem, making it hard for reinforcement learning to converge. In this paper, we propose a novel decision-assisted deep reinforcement learning approach to improve the convergence. This is achieved by exploiting a priori information about the buffer-aided relay system. The proposed approaches achieve high throughput subject to delay constraints. Extensive simulation results are provided to verify the proposed algorithms.
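The abstract does not include the authors' implementation; the following is a minimal, illustrative sketch (all names, dimensions, and the state encoding are assumptions) of the core idea as described: a deep Q-network for relay selection whose exploration is assisted by a priori buffer knowledge, i.e., actions that would overflow a full buffer or drain an empty one are masked out so the agent never wastes exploration on them.

# Minimal sketch (illustrative, not the authors' exact model) of
# decision-assisted deep Q-learning for buffer-aided relay selection.
# A priori buffer knowledge masks out infeasible actions, which is the
# "decision assistance" that speeds up convergence.
import numpy as np
import torch
import torch.nn as nn

N_RELAYS = 4          # assumed number of relays
BUF_SIZE = 5          # assumed buffer size (packets) per relay
# Action k in [0, N_RELAYS): source -> relay k;
# action k in [N_RELAYS, 2*N_RELAYS): relay (k - N_RELAYS) -> destination.
N_ACTIONS = 2 * N_RELAYS

q_net = nn.Sequential(                 # state: buffer occupancies + channel gains
    nn.Linear(2 * N_RELAYS, 64), nn.ReLU(),
    nn.Linear(64, N_ACTIONS),
)

def feasible_mask(buffers):
    """A priori decision assistance: a full buffer cannot receive,
    an empty buffer cannot transmit."""
    rx_ok = buffers < BUF_SIZE          # source -> relay links
    tx_ok = buffers > 0                 # relay -> destination links
    return np.concatenate([rx_ok, tx_ok])

def select_action(state, buffers, eps=0.1):
    mask = feasible_mask(buffers)
    if np.random.rand() < eps:                       # explore feasible actions only
        return int(np.random.choice(np.flatnonzero(mask)))
    with torch.no_grad():
        q = q_net(torch.tensor(state, dtype=torch.float32)).numpy()
    q[~mask] = -np.inf                               # prune infeasible actions
    return int(np.argmax(q))

state = np.zeros(2 * N_RELAYS, dtype=np.float32)     # toy state
buffers = np.array([0, 2, 5, 1])
print(select_action(state, buffers))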



Related research

A secure downlink transmission system that is exposed to multiple eavesdroppers and is appropriate for Internet of Things (IoT) applications is considered. A worst-case scenario is assumed, in the sense that, in order to enhance their interception ability, all eavesdroppers are located close to each other, near the controller, and collude to form joint receive beamforming. For such a system, a novel cooperative non-orthogonal multiple access (NOMA) secure transmission scheme is proposed, in which an IoT device with a stronger channel condition acts as an energy-harvesting relay to assist a second IoT device operating under weaker channel conditions; its performance is analyzed and evaluated. A secrecy sum rate (SSR) maximization problem is formulated and solved under three constraints: i) transmit power; ii) successive interference cancellation; iii) quality of service. By considering both passive and active eavesdropper scenarios, two optimization schemes are proposed to improve the overall system SSR. On the one hand, for the passive eavesdropper scenario, an artificial-noise-aided secure beamforming scheme is proposed. Since this optimization problem is nonconvex, instead of using the traditional but highly complex brute-force two-dimensional search, it is conveniently transformed into a convex one via an epigraph reformulation. On the other hand, for the active multi-antenna eavesdroppers scenario, an orthogonal-projection-based beamforming scheme is considered, and by employing the successive convex approximation method, a suboptimal solution is proposed. Furthermore, since the orthogonal-projection-based scheme may not be applicable to single-antenna transmission, a simple power control scheme is proposed.
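As a toy illustration of the simplest ingredient above, the single-antenna power control case: a one-dimensional search for the transmit power that maximizes the secrecy rate. The channel gains, noise power, and power budget below are assumed values for demonstration, not the paper's system parameters.

# Illustrative sketch of simple power control for secrecy:
# choose the transmit power in [0, P_max] that maximizes the
# secrecy rate (legitimate-link rate minus eavesdropper-link rate).
import numpy as np

def secrecy_rate(p, g_legit, g_eve, noise=1.0):
    rate_d = np.log2(1.0 + p * g_legit / noise)   # legitimate-link rate
    rate_e = np.log2(1.0 + p * g_eve / noise)     # eavesdropper-link rate
    return np.maximum(rate_d - rate_e, 0.0)

p_grid = np.linspace(0.0, 10.0, 1000)             # candidate powers, P_max = 10
rates = secrecy_rate(p_grid, g_legit=2.0, g_eve=0.5)
p_star = p_grid[np.argmax(rates)]
print(f"best power {p_star:.2f}, secrecy rate {rates.max():.2f} bit/s/Hz")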
We propose a decision-triggered data transmission and collection (DTDTC) protocol for condition monitoring and anomaly detection in the Industrial Internet of Things (IIoT). In the IIoT, the collection, processing, encoding, and transmission of sensor readings are usually performed not to reconstruct the original data but to support decision making at the fusion center. By moving the decision-making process to the local end devices, the amount of data transmission can be significantly reduced, especially when normal signals with positive decisions dominate the whole life cycle and the fusion center is only interested in collecting the abnormal data. The proposed concept combines compressive sensing, machine learning, data transmission, and joint decision making. Sensor readings are encoded and transmitted to the fusion center only when abnormal signals with negative decisions are detected. All the abnormal signals from the end devices are gathered at the fusion center for a joint decision, with feedback messages forwarded to the local actuators. The advantage of such an approach is that it can significantly reduce the volume of data transmitted over wireless links. Moreover, the introduction of compressive sensing can further reduce the data dimension tremendously. An exemplary case, diesel engine condition monitoring, is provided to validate the effectiveness and efficiency of the proposed scheme compared to conventional ones.
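A minimal sketch of the decision-triggered idea, assuming a toy energy detector and a Gaussian random measurement matrix (both illustrative choices, not the paper's design): readings are compressed by random projection and transmitted only on a negative (abnormal) local decision.

# Sketch of decision-triggered compressed transmission: each device
# transmits only when its local detector flags the reading as abnormal,
# and then sends M compressive measurements instead of N raw samples.
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 32                                   # signal length, compressed length
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix

def local_decision(x, threshold=3.0):
    """Toy anomaly detector: flag readings with unusually high RMS energy."""
    return np.linalg.norm(x) / np.sqrt(len(x)) > threshold

def maybe_transmit(x):
    if local_decision(x):            # negative decision -> abnormal -> send
        return Phi @ x               # M measurements instead of N samples
    return None                      # positive decision -> nothing is sent

normal = rng.standard_normal(N)
abnormal = 5.0 * rng.standard_normal(N)
print(maybe_transmit(normal))          # None: no uplink traffic
print(maybe_transmit(abnormal).shape)  # (32,): 8x fewer samples on the air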
We study a cooperative network with a buffer-aided multi-antenna source, multiple half-duplex (HD) buffer-aided relays, and a single destination. Such a setup could represent a cellular downlink scenario, in which the source is a more powerful wireless device with a buffer and multiple antennas, while a set of intermediate, less powerful devices serve as relays to reach the destination. The main target is to recover the multiplexing loss of the network by having the source and a relay simultaneously transmit their information to another relay and the destination, respectively. Successive transmissions in such a cooperative network, however, cause inter-relay interference (IRI). First, assuming global channel state information (CSI), we show that the detrimental effect of IRI can be alleviated by precoding at the source, mitigating or even fully cancelling the interference. A cooperative relaying policy is proposed that employs a joint precoding design and relay-pair selection; both fixed-rate and adaptive-rate transmissions can be considered. For the case where channel state information is available only at the receiver side (CSIR), we propose a relay selection policy that employs a phase alignment technique to reduce the IRI. The performance of the two proposed relay-pair selection policies is evaluated and compared with other state-of-the-art relaying schemes in terms of outage and throughput. The results show that the use of a powerful source can provide considerable performance improvements.
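A toy sketch of relay-pair selection under IRI, with the simplifying assumption of scalar channel gains and no precoding (the paper's schemes additionally use precoding or phase alignment to suppress the IRI): the pair maximizing the bottleneck rate is chosen by exhaustive search.

# Toy relay-pair selection: pick a receiving relay r (source -> r) and a
# transmitting relay t (t -> destination) maximizing the weaker of the two
# link rates, with IRI from t degrading relay r's reception.
import itertools
import numpy as np

rng = np.random.default_rng(1)
K = 4
g_sr = rng.exponential(1.0, K)       # source -> relay channel gains
g_rd = rng.exponential(1.0, K)       # relay -> destination gains
g_rr = rng.exponential(0.3, (K, K))  # inter-relay interference gains
P, noise = 1.0, 0.1

best = max(
    itertools.permutations(range(K), 2),       # ordered (rx, tx) pairs
    key=lambda p: min(
        np.log2(1 + P * g_sr[p[0]] / (noise + P * g_rr[p[1], p[0]])),  # S->r with IRI
        np.log2(1 + P * g_rd[p[1]] / noise),                           # t->D
    ),
)
print("selected (rx relay, tx relay):", best)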
The Internet of Things (IoT) is penetrating many facets of our daily life with the proliferation of intelligent services and applications empowered by artificial intelligence (AI). Traditionally, AI techniques require centralized data collection and processing, which may not be feasible in realistic application scenarios due to the scale of modern IoT networks and growing data privacy concerns. Federated Learning (FL) has emerged as a distributed collaborative AI approach that can enable many intelligent IoT applications by allowing AI training at distributed IoT devices without the need for data sharing. In this article, we provide a comprehensive survey of the emerging applications of FL in IoT networks, beginning with an introduction to recent advances in FL and IoT and moving to a discussion of their integration. In particular, we explore and analyze the potential of FL for enabling a wide range of IoT services, including IoT data sharing, data offloading and caching, attack detection, localization, mobile crowdsensing, and IoT privacy and security. We then provide an extensive survey of the use of FL in key IoT applications such as smart healthcare, smart transportation, Unmanned Aerial Vehicles (UAVs), smart cities, and smart industry. The important lessons learned from this review of FL-IoT services and applications are also highlighted. We complete the survey by discussing the current challenges and possible directions for future research in this booming area.
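A minimal FedAvg-style sketch of the FL principle the survey builds on: each device trains on its private data and only model parameters, never raw data, are averaged at the server. The linear model, data, and hyper-parameters are illustrative assumptions.

# Minimal federated averaging: local SGD on each device's private data,
# followed by server-side parameter averaging each communication round.
import numpy as np

rng = np.random.default_rng(2)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Local linear-regression training on one device's private data."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_true = np.array([1.0, -2.0, 0.5])              # ground truth to recover
Xs = [rng.standard_normal((50, 3)) for _ in range(3)]   # three devices
datasets = [(X, X @ w_true + 0.1 * rng.standard_normal(50)) for X in Xs]

w_global = np.zeros(3)
for rnd in range(20):                             # communication rounds
    local = [local_sgd(w_global.copy(), X, y) for X, y in datasets]
    w_global = np.mean(local, axis=0)             # server-side averaging
print("recovered weights:", np.round(w_global, 2))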
We derive closed-form expressions for the achievable rates of a buffer-aided full-duplex (FD) multiple-input multiple-output (MIMO) Gaussian relay channel. The FD relay still suffers from residual self-interference (RSI) after the application of self-interference mitigation techniques. We investigate both the case of a slow-RSI channel, where the RSI is fixed over the entire codeword, and that of a fast-RSI channel, where the RSI changes from one symbol duration to another within the codeword. We show that the RSI can be completely eliminated in the slow-RSI case when the FD relay is equipped with a buffer, while fast RSI cannot be eliminated. For the fixed-rate data transmission scenario, we derive the optimal transmission strategy that the source and relay nodes should adopt to maximize the system throughput. We verify our analytical findings through simulations.
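A toy Monte Carlo sketch of the fixed-rate scenario, under assumed Rayleigh fading and an assumed RSI level (not the paper's closed-form analysis): the buffer decouples the two hops, and throughput is measured as delivered packets per slot.

# Toy simulation of a buffer-aided FD relay with residual self-interference:
# the source transmits whenever S->R supports the fixed rate given the RSI,
# and the relay forwards whenever its buffer is non-empty and R->D supports it.
import numpy as np

rng = np.random.default_rng(3)
slots, R0, P, noise, rsi = 100_000, 1.0, 1.0, 1.0, 0.2
buffer, delivered = 0, 0

for _ in range(slots):
    h_sr = rng.exponential(1.0)               # Rayleigh fading power gains
    h_rd = rng.exponential(1.0)
    tx_relay = buffer > 0 and np.log2(1 + P * h_rd / noise) >= R0
    interference = rsi * P if tx_relay else 0.0          # FD self-interference
    rx_relay = np.log2(1 + P * h_sr / (noise + interference)) >= R0
    if rx_relay:
        buffer += 1
    if tx_relay:
        buffer -= 1
        delivered += 1

print(f"throughput ~ {R0 * delivered / slots:.3f} bit/s/Hz")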
