Ultra-Reliable Low-Latency Communications (URLLC) is a novel feature of 5G cellular systems. To satisfy the strict URLLC requirements for uplink data transmission, the 5G specifications introduce the grant-free channel access method, in which a User Equipment (UE) transmits packets without requesting channel resources from the base station (gNB). With grant-free channel access, the gNB configures the uplink transmission parameters on a long-term time scale. Since the channel quality can change significantly in the time and frequency domains, the gNB should select robust transmission parameters to satisfy the URLLC requirements. Many existing studies consider fixed, robust uplink transmission parameters that satisfy the requirements even for UEs with poor channel conditions. However, the more robust the selected transmission parameters, the lower the network capacity. In this paper, we propose an adaptive algorithm that selects the transmission parameters depending on the channel quality, based on analysis of signal-to-noise ratio statistics at the gNB. Simulation results obtained with NS-3 show that the algorithm meets the URLLC latency and reliability requirements while reducing channel resource consumption by more than half compared with fixed transmission parameter selection.
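The SNR-statistics-based parameter selection described in this abstract can be illustrated with a minimal sketch: the gNB collects SNR samples for a UE and picks the highest modulation-and-coding scheme (MCS) whose threshold is still met by a low percentile of the observations, so the choice stays robust to fading. The threshold table, the 5th-percentile choice, and all names below are illustrative assumptions, not the paper's actual algorithm:

```python
# Hypothetical SNR thresholds (dB): index = MCS, higher MCS = less robust.
MCS_THRESHOLDS_DB = [-2.0, 1.0, 4.0, 7.0, 10.0, 13.0]

def select_mcs(snr_samples_db, percentile=0.05):
    """Pick the highest MCS whose threshold is met by a low percentile
    of observed SNR, so the link stays robust to channel variations."""
    if not snr_samples_db:
        return 0  # no statistics yet: fall back to the most robust MCS
    samples = sorted(snr_samples_db)
    idx = max(0, int(len(samples) * percentile) - 1)
    robust_snr = samples[idx]
    mcs = 0
    for i, threshold in enumerate(MCS_THRESHOLDS_DB):
        if robust_snr >= threshold:
            mcs = i
    return mcs
```

A less robust (higher) MCS consumes fewer channel resources per packet, which is where the capacity gain over a fixed, worst-case configuration comes from.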
The newly introduced ultra-reliable low-latency communication service class in 5G New Radio depends on innovative low-latency radio resource management solutions that can guarantee high reliability. Grant-free random access, where channel resources are accessed without a handshake-based assignment process, is proposed in 5G New Radio as an important latency-reducing solution. However, it comes with an increased likelihood of collisions resulting from uncontrolled channel access when the same resources are preallocated to a group of users, so novel reliability enhancement techniques are needed. This article provides an overview of grant-free random access in 5G New Radio, focusing on the ultra-reliable low-latency communication service class, and presents two reliability-enhancing solutions. The first proposes retransmissions over shared resources, whereas the second combines grant-free transmission with non-orthogonal multiple access, with overlapping transmissions resolved by advanced receivers. Both proposed solutions yield significant gains in reliability as well as resource efficiency. For example, the proposed non-orthogonal multiple access scheme can support a normalized load of more than 1.5 users/slot at packet loss rates of ~10^{-5}, a significant improvement over the maximum load supported by conventional grant-free schemes such as slotted ALOHA.
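The slotted-ALOHA baseline that the article compares against can be sketched with a short Monte-Carlo estimate of collision-induced packet loss. The simulation parameters and the all-packets-lost collision model are simplifying assumptions (no capture effect, no retransmissions), meant only to show why uncontrolled access caps the supported load:

```python
import random

def slotted_aloha_loss(n_users, p_tx, n_slots=10000, seed=0):
    """Monte-Carlo packet loss rate in slotted ALOHA: every user
    transmits independently with probability p_tx per slot, and a
    packet is lost whenever two or more users transmit in one slot."""
    rng = random.Random(seed)
    sent = lost = 0
    for _ in range(n_slots):
        transmitters = sum(1 for _ in range(n_users) if rng.random() < p_tx)
        sent += transmitters
        if transmitters > 1:
            lost += transmitters  # collision: all packets in the slot are lost
    return lost / sent if sent else 0.0
```

Even modest per-user activity drives the loss rate far above the ~10^{-5} regime quoted above, which motivates the shared-resource retransmission and non-orthogonal multiple access enhancements.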
This paper presents a novel framework for traffic prediction of IoT devices activated by binary Markovian events. First, we consider a massive set of IoT devices whose activation events are modeled by an On-Off Markov process with known transition probabilities. Next, we exploit the temporal correlation of the traffic events and apply the forward algorithm of hidden Markov models (HMMs) to predict the activation likelihood of each IoT device. Finally, we apply the fast uplink grant scheme to allocate resources to the IoT devices with the highest transmission likelihood. To evaluate the performance of the proposed scheme, we define the regret metric as the number of missed resource allocation opportunities. The proposed fast uplink scheme based on traffic prediction outperforms both conventional random access and time division duplex in terms of regret and efficiency of system usage, while maintaining its superiority over random access in terms of average age of information for massive deployments.
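The prediction step described here follows the standard HMM forward recursion: propagate the activation belief through the On-Off transition model, then condition on an observation of activity. A minimal sketch for a single device is given below; the noisy-observation likelihoods (`p_obs_given_on`, `p_obs_given_off`) are illustrative assumptions rather than values from the paper:

```python
def predict_activation(belief_on, p_on_to_off, p_off_to_on):
    """One-step prediction of the probability that an On-Off Markov
    source is active in the next slot, given the current belief."""
    return belief_on * (1 - p_on_to_off) + (1 - belief_on) * p_off_to_on

def forward_update(belief_on, observed_active, p_on_to_off, p_off_to_on,
                   p_obs_given_on=0.95, p_obs_given_off=0.05):
    """Forward-algorithm step: propagate the belief through the
    transition model, then condition on a (noisy) activity observation."""
    prior_on = predict_activation(belief_on, p_on_to_off, p_off_to_on)
    if observed_active:
        num = prior_on * p_obs_given_on
        den = num + (1 - prior_on) * p_obs_given_off
    else:
        num = prior_on * (1 - p_obs_given_on)
        den = num + (1 - prior_on) * (1 - p_obs_given_off)
    return num / den
```

A fast-uplink-grant scheduler would run this recursion per device and grant resources to the devices with the highest predicted activation probability.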
Large-scale antenna arrays employed by the base station (BS) constitute an essential next-generation communications technique. However, due to the constraints of size, cost, and power consumption, it is usually considered unrealistic to use a large-scale antenna array at the user side. Inspired by the emerging technique of reconfigurable intelligent surfaces (RIS), we first propose the concept of user-side RIS (US-RIS) for facilitating the employment of a large-scale antenna array at the user side in a cost- and energy-efficient way. In contrast to existing deployments of RIS, which belong to the family of base-station-side RISs (BSS-RISs), the US-RIS concept by definition facilitates the employment of RIS at the user side for the first time. This is achieved by conceiving a multi-layer structure that realizes a compact form factor. Furthermore, our theoretical results demonstrate that, in contrast to the existing single-layer structure, where only the phase of the signal reflected from the RIS can be adjusted, the amplitude of the signal penetrating a multi-layer US-RIS can also be partially controlled, which brings about a new degree of freedom (DoF) for beamformer design that can be beneficially exploited for performance enhancement. In addition, based on the proposed multi-layer US-RIS, we formulate the signal-to-noise ratio (SNR) maximization problem of US-RIS-aided communications. Due to the non-convexity of the problem introduced by this multi-layer structure, we propose a multi-layer transmit beamformer design relying on an iterative algorithm that finds a solution by alternately updating each variable. Finally, our simulation results verify the superiority of the proposed multi-layer US-RIS as a compact realization of a large-scale antenna array at the user side for uplink transmission.
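The alternating-update procedure mentioned in this abstract can be illustrated with a toy single-layer model: hold the element phases fixed and set the transmit beamformer to the matched filter of the effective channel, then hold the beamformer fixed and align every element phase so the terms add coherently; each step can only increase the combined channel gain. The scalar channel model below is a simplification and omits the multi-layer amplitude control that the paper exploits; all names are illustrative:

```python
import cmath
import random

def alternating_beamforming(h, G, iters=20, seed=0):
    """Alternating maximization of |sum_n e^{j*theta_n} h_n (g_n^H w)|
    over the unit-norm beamformer w and per-element phases theta_n.
    h[n] is the scalar channel of element n; G[n] is its length-M
    channel vector toward the M transmit antennas."""
    rng = random.Random(seed)
    n_elems, n_ant = len(h), len(G[0])
    theta = [rng.uniform(0, 2 * cmath.pi) for _ in h]
    w = [complex(1 / n_ant ** 0.5)] * n_ant  # overwritten in the first iteration
    for _ in range(iters):
        # w-step: matched filter to the phase-combined effective channel.
        c = [sum(cmath.exp(1j * theta[n]) * h[n] * G[n][m].conjugate()
                 for n in range(n_elems)) for m in range(n_ant)]
        norm = sum(abs(x) ** 2 for x in c) ** 0.5
        w = [x.conjugate() / norm for x in c]
        # theta-step: rotate each element so its term adds coherently.
        for n in range(n_elems):
            gain = h[n] * sum(G[n][m].conjugate() * w[m] for m in range(n_ant))
            theta[n] = -cmath.phase(gain)
    s = sum(cmath.exp(1j * theta[n]) * h[n] *
            sum(G[n][m].conjugate() * w[m] for m in range(n_ant))
            for n in range(n_elems))
    return abs(s)  # combined channel gain, proportional to sqrt(SNR)
```

Because each half-step solves its subproblem exactly for the other variable held fixed, the objective is monotonically non-decreasing, which is the usual convergence argument for such alternating designs on non-convex problems.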
Future wireless networks are envisioned to serve massive Internet of Things (mIoT) traffic via radio access technologies in which IoT devices use the random access channel (RACH) procedure to access the network. However, theoretical analysis of the RACH procedure for massive numbers of IoT devices is challenging. To address this challenge, we first correlate the RACH request of an IoT device with the status of its maintained queue and analyze the evolution of the queue status. Based on this analysis, we derive a closed-form expression for the random access (RA) success probability, a key indicator characterizing the RACH procedure of the device. Besides, considering the trend of converging different services onto a shared infrastructure, we investigate RAN slicing for multiplexing mIoT and bursty ultra-reliable and low-latency communication (URLLC) services. Specifically, we formulate RAN slicing as an optimization problem that maximizes the total RA success probability of all IoT devices while providing URLLC services to URLLC devices in an energy-efficient way. A slice resource optimization (SRO) algorithm exploiting relaxation and approximation, with provable tightness and error bounds, is then proposed to solve the optimization problem. Simulation results demonstrate that the proposed SRO algorithm can effectively implement the service multiplexing of mIoT and bursty URLLC traffic.
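The queue-coupled RA success probability analyzed here can also be estimated empirically, which is a useful sanity check for a closed-form derivation. The sketch below simulates Bernoulli packet arrivals into per-device queues, lets every backlogged device pick a random preamble, and counts an attempt as successful only when its preamble is chosen by no other device; the arrival model and preamble counts are illustrative assumptions, not the paper's system parameters:

```python
import random

def ra_success_probability(n_devices, n_preambles, p_arrival,
                           n_slots=20000, seed=0):
    """Monte-Carlo RA success probability: a device attempts random
    access whenever its queue is non-empty, and an attempt succeeds
    only if its randomly chosen preamble is not picked by anyone else."""
    rng = random.Random(seed)
    queues = [0] * n_devices
    attempts = successes = 0
    for _ in range(n_slots):
        for d in range(n_devices):
            if rng.random() < p_arrival:  # Bernoulli packet arrival
                queues[d] += 1
        picks = {}
        for d in range(n_devices):
            if queues[d] > 0:  # backlogged devices contend on the RACH
                picks.setdefault(rng.randrange(n_preambles), []).append(d)
        for devices in picks.values():
            attempts += len(devices)
            if len(devices) == 1:  # unique preamble: RA succeeds
                successes += 1
                queues[devices[0]] -= 1
    return successes / attempts if attempts else 0.0
```

In a RAN slicing setting, `n_preambles` plays the role of the sliced RACH resource: shrinking the mIoT slice to protect URLLC traffic visibly lowers the RA success probability, which is the trade-off the optimization problem balances.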
Massive machine-type communication (mMTC) and ultra-reliable and low-latency communication (URLLC) are two key service types in fifth-generation (5G) communication systems, pursuing scalability and reliability with low latency, respectively. These two extreme services are envisaged to soon merge into "critical mMTC" with emerging use cases (e.g., wide-area disaster monitoring, wireless factory automation), creating new challenges for designing wireless systems beyond 5G. While conventional network slicing is effective in supporting a simple mixture of mMTC and URLLC, it is difficult to simultaneously guarantee the reliability, latency, and scalability requirements of critical mMTC (e.g., <4 ms latency and 10^6 devices/km^2 for factory automation) with limited radio resources. Furthermore, recently proposed solutions for scalable URLLC (e.g., machine learning aided URLLC for driverless vehicles) are ill-suited to critical mMTC, whose machine-type users have minimal energy budgets and computing capability that should be (tightly) optimized for given tasks. To this end, our paper aims to characterize promising use cases of critical mMTC and search for their possible solutions. We first review the state-of-the-art (SOTA) technologies for separate mMTC and URLLC services and then identify key challenges arising from conflicting SOTA requirements, followed by potential approaches to prospective critical mMTC solutions at different layers.