Semi-grant-free (SGF) transmission has recently received significant attention due to its capability to accommodate massive connectivity and reduce access delay by admitting grant-free users to channels that would otherwise be solely occupied by grant-based users. In this paper, a new SGF transmission scheme that exploits the flexibility in choosing the decoding order in non-orthogonal multiple access (NOMA) is proposed. Compared to existing SGF schemes, this new scheme can ensure that admitting the grant-free users is completely transparent to the grant-based users, i.e., the grant-based users' quality-of-service experience is guaranteed to be the same as for orthogonal multiple access. In addition, compared to existing SGF schemes, the proposed SGF scheme can significantly improve the robustness of the grant-free users' transmissions and effectively avoid outage probability error floors. To facilitate the performance evaluation of the proposed SGF transmission scheme, an exact expression for the outage probability is obtained and an asymptotic analysis is conducted to show that the achievable multi-user diversity gain is proportional to the number of participating grant-free users. Computer simulation results demonstrate the performance of the proposed SGF transmission scheme and verify the accuracy of the developed analytical results.
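To make the multi-user diversity claim concrete, the following Monte Carlo sketch illustrates the general scaling behaviour under simplified assumptions (i.i.d. Rayleigh fading, the grant-free user with the strongest channel admitted, an assumed target rate); it is an illustration only, not a simulation of the paper's exact SGF scheme:

```python
import numpy as np

# Illustrative sketch (not the paper's exact SGF scheme): among N grant-free
# users with i.i.d. Rayleigh fading, the one with the strongest channel is
# admitted; its outage slope versus SNR reflects a multi-user diversity gain
# that grows with N.
rng = np.random.default_rng(0)
R_target = 1.0        # assumed target rate in bits/s/Hz
trials = 200_000

for N in (1, 2, 4):                     # number of participating grant-free users
    for snr_db in (0, 5, 10):
        rho = 10 ** (snr_db / 10)
        h2 = rng.exponential(scale=1.0, size=(trials, N))   # |h|^2 under Rayleigh fading
        best = h2.max(axis=1)                                # channel of the admitted GF user
        outage = np.mean(np.log2(1 + rho * best) < R_target)
        print(f"N={N}, SNR={snr_db} dB: outage ~ {outage:.2e}")
```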
In this paper, we exploit the capability of the multi-agent deep reinforcement learning (MA-DRL) technique to generate a transmit power pool (PP) for Internet of Things (IoT) networks with semi-grant-free non-orthogonal multiple access (SGF-NOMA). The PP is mapped to each resource block (RB) to achieve distributed transmit power control (DPC). We first formulate the resource (sub-channel and transmit power) selection problem as a stochastic Markov game, and then solve it using two competitive MA-DRL algorithms, namely double deep Q-network (DDQN) and Dueling DDQN. Each grant-free (GF) user, acting as an agent, tries to find the optimal transmit power level and RB to form the desired PP. With the aid of the dueling architecture, the learning process can be enhanced by estimating the value of a state without having to evaluate the effect of each action in that state. Therefore, DDQN is designed for communication scenarios with a small action-state space, while Dueling DDQN is designed for large ones. Our results show that the proposed MA-Dueling DDQN based SGF-NOMA with DPC outperforms the SGF-NOMA system with a fixed-power-control mechanism and networks with pure GF protocols, with throughput gains of 17.5% and 22.2%, respectively. Moreover, to decrease the training time, we eliminate invalid actions (high transmit power levels) to reduce the action space. We show that our proposed algorithm is computationally scalable to massive IoT networks. Finally, to control the interference and guarantee the quality-of-service requirements of grant-based users, we find the optimal number of GF users for each sub-channel.
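A minimal sketch of the dueling aggregation at the heart of a Dueling DDQN head is shown below; the helper function and the toy action set (candidate sub-channel/power-level pairs) are illustrative assumptions, not the paper's exact network:

```python
import numpy as np

# Generic dueling aggregation: each Q-value is assembled from a scalar state
# value V(s) and per-action advantages A(s, a), with the mean advantage
# subtracted for identifiability.
def dueling_q_values(state_value, advantages):
    """state_value: shape (batch, 1); advantages: shape (batch, n_actions)."""
    return state_value + advantages - advantages.mean(axis=1, keepdims=True)

# Toy example: one state and three candidate (sub-channel, power-level) actions.
V = np.array([[2.0]])
A = np.array([[0.5, -0.2, 0.1]])
print(dueling_q_values(V, A))   # Q(s, a) for the three actions
```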
Grant-free non-orthogonal multiple access (GF-NOMA) is a potential technique to support the massive ultra-reliable and low-latency communication (mURLLC) service. However, dynamic resource configuration in GF-NOMA systems is challenging due to random traffic and collisions that are unknown at the base station (BS). Meanwhile, the joint consideration of latency and reliability requirements makes the resource configuration of GF-NOMA for mURLLC more complex. To address this problem, we develop a general learning framework for signature-based GF-NOMA in the mURLLC service, taking into account multiple-access signature collisions, user equipment (UE) detection, and the data decoding procedures of the K-repetition GF and Proactive GF schemes. The goal of our learning framework is to maximize the long-term average number of successfully served UEs under the latency constraint. We first perform real-time repetition value configuration based on a double deep Q-network (DDQN) and then propose a cooperative multi-agent learning technique based on the DQN (CMA-DQN) to optimize the configuration of both the repetition values and the contention-transmission unit (CTU) numbers. Our results show that the number of successfully served UEs under the same latency constraint in our proposed learning framework is up to ten times (for the K-repetition scheme) and two times (for the Proactive scheme) that achieved with fixed repetition values and CTU numbers. In addition, the superior performance of CMA-DQN over the conventional load-estimation-based approach (LE-URC) demonstrates its capability of dynamic configuration over the long term. Importantly, our general learning framework can be used to optimize the resource configuration problems in all signature-based GF-NOMA schemes.
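As a reference point for the DDQN-based configuration step, a generic double-DQN target computation is sketched below; the function name and the toy batch are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

# Generic double-DQN target: the online network selects the next action and
# the target network evaluates it, which reduces the overestimation bias of
# plain Q-learning.
def ddqn_targets(rewards, next_q_online, next_q_target, dones, gamma=0.99):
    """rewards, dones: shape (batch,); next_q_*: shape (batch, n_actions)."""
    best_next = np.argmax(next_q_online, axis=1)                    # action selection
    evaluated = next_q_target[np.arange(len(rewards)), best_next]   # action evaluation
    return rewards + gamma * (1.0 - dones) * evaluated

# Toy batch of two transitions with three candidate repetition values as actions.
r = np.array([1.0, 0.0])
q_on = np.array([[0.2, 0.8, 0.1], [0.5, 0.4, 0.9]])
q_tg = np.array([[0.3, 0.7, 0.2], [0.6, 0.3, 0.8]])
d = np.array([0.0, 1.0])
print(ddqn_targets(r, q_on, q_tg, d))
```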
Non-orthogonal multiple access (NOMA) and caching are two proposed approaches to increase the capacity of future 5G wireless systems. Typically, in NOMA systems, signals at the receiver are decoded using successive interference cancellation in order to achieve capacity in multi-user systems. Leveraging caching at the physical layer to further improve on the benefits of NOMA, termed cache-aided NOMA, is investigated. Specific attention is given to the caching cases in which users with weaker channel conditions possess a cached copy of the information requested by a user with a stronger channel condition. The probability that any of the users is in outage for any of the rates required by this NOMA system, defined as the union-outage probability, is derived for the case of fixed power allocation, and the power allocation strategy that minimizes the union-outage probability is derived. Simulation results confirm the analytical results and demonstrate the benefits of cache-aided NOMA in reducing the union-outage probability.
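For reference, the union-outage metric can be written compactly as follows; the notation ($K$ users, achievable rate $R_k$ and target rate $\tilde{R}_k$ for user $k$) is assumed for illustration rather than taken from the paper:
\begin{equation}
P_{\mathrm{out}}^{\mathrm{union}} \;=\; \Pr\!\left( \bigcup_{k=1}^{K} \left\{ R_k < \tilde{R}_k \right\} \right) \;=\; 1 - \Pr\!\left( \bigcap_{k=1}^{K} \left\{ R_k \ge \tilde{R}_k \right\} \right),
\end{equation}
i.e., the system is counted as successful only if every user simultaneously meets its required rate.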
The fundamental power allocation requirements for NOMA systems with minimum quality-of-service (QoS) requirements are investigated. For any minimum QoS rate $R_0$, the limits on the power allocation coefficients for each user are derived, such that any power allocation coefficient outside of these limits creates an outage with probability equal to 1. The power allocation coefficients that facilitate each user's success in performing successive interference cancellation (SIC) and decoding its own signal are derived, and are found to depend only on the target rate $R_0$ and the total number of users $K$. It is then proven that using these power allocation coefficients creates the same outage event as using orthogonal multiple access (OMA), which proves that the outage performance of NOMA with a fixed-power scheme can match that of OMA for all users simultaneously. Simulations confirm the theoretical results, and also demonstrate that a power allocation strategy exists that can improve the outage performance of NOMA over OMA, even with a fixed-power strategy.
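To illustrate the type of limit involved, consider the standard downlink-NOMA SIC constraint (assumed notation: users ordered from weakest, $m=1$, to strongest, $m=K$, power coefficients $a_1,\dots,a_K$, transmit SNR $\rho$); this is a sketch of the underlying reasoning rather than the paper's exact derivation. The SINR at user $k \ge m$ when decoding user $m$'s signal satisfies
\begin{equation}
\mathrm{SINR}_{k \to m} \;=\; \frac{\rho\, a_m |h_k|^2}{\rho |h_k|^2 \sum_{j=m+1}^{K} a_j + 1} \;<\; \frac{a_m}{\sum_{j=m+1}^{K} a_j},
\end{equation}
so if $a_m \le \epsilon_0 \sum_{j=m+1}^{K} a_j$ with $\epsilon_0 = 2^{R_0}-1$, the target rate $R_0$ cannot be met for any channel realization, i.e., an outage occurs with probability one; limits of this form depend only on $R_0$ and $K$ once the coefficients are normalized to sum to one.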
The next-generation Internet of Things (IoT) exhibits a unique feature: IoT devices have different energy profiles and quality-of-service (QoS) requirements. In this paper, two energy- and spectrally efficient transmission strategies, namely wireless power transfer assisted non-orthogonal multiple access (WPT-NOMA) and backscatter communication assisted NOMA (BAC-NOMA), are proposed by utilizing this feature of IoT and employing spectrum and energy cooperation among the devices. Furthermore, for the proposed WPT-NOMA scheme, the application of hybrid successive interference cancellation (SIC) is also considered, and analytical results are developed to demonstrate that WPT-NOMA can avoid outage probability error floors and realize the full diversity gain. Unlike WPT-NOMA, BAC-NOMA suffers from an outage probability error floor, and the asymptotic behaviour of this error floor is analyzed in the paper by applying extreme value theory. In addition, the effect of a unique feature of BAC-NOMA, i.e., employing one device's signal as the carrier signal for another device, is studied, and its impact on the diversity gain is revealed. Simulation results are also provided to compare the performance of the proposed strategies and verify the developed analytical results.
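The origin of a BAC-NOMA error floor can be seen from a simplified signal model (the notation below is assumed for illustration and is not the paper's exact system model): because the backscatter device modulates its information onto another device's transmission, both the useful signal and the interference scale with the transmit power $P$, so the receive SINR for the backscatter device saturates,
\begin{equation}
\mathrm{SINR}_{\mathrm{BD}} \;=\; \frac{\eta\, P\, |g|^2 |f|^2}{P |h|^2 + \sigma^2} \;\xrightarrow{\;P \to \infty\;}\; \frac{\eta\, |g|^2 |f|^2}{|h|^2},
\end{equation}
where $\eta$ is the reflection coefficient, $g$ and $f$ are the device-to-backscatter and backscatter-to-receiver channels, $h$ is the interfering direct link, and $\sigma^2$ is the noise power; since the limit is independent of $P$, the outage probability converges to a floor determined by the channel statistics alone.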