We consider a fully-loaded ground wireless network supporting unmanned aerial vehicle (UAV) transmission services. To enable overload transmissions to a ground user (GU) and a UAV, two transmission schemes are employed, namely non-orthogonal multiple access (NOMA) and relaying, depending on whether or not the GU and the UAV are served simultaneously. Under the assumption that the system operates with infinite blocklength (IBL) codes, the IBL throughputs of both the GU and the UAV are derived under the two schemes. More importantly, we also consider the scenario in which data packets are transmitted via finite blocklength (FBL) codes, i.e., data transmission to both the UAV and the GU is performed under low-latency and high-reliability constraints. In this setting, the FBL throughputs are characterized, again considering the two schemes of NOMA and relaying. Following the IBL and FBL throughput characterizations, optimal resource allocation designs are subsequently proposed to maximize the UAV throughput while guaranteeing the throughput of the cellular user. Moreover, we prove that the relaying scheme is able to provide transmission service to the UAV while improving the GU's performance, and that the relaying scheme potentially offers a higher throughput to the UAV in the FBL regime than in the IBL regime. On the other hand, the NOMA scheme provides a higher UAV throughput (than relaying) by slightly sacrificing the GU's performance.
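To make the IBL/FBL distinction concrete, the sketch below evaluates the standard normal-approximation achievable rate at finite blocklength and compares it with the Shannon (IBL) rate. The SNR values, blocklength, and error target are illustrative assumptions rather than parameters from the paper, and the second-order log-term of the approximation is omitted for simplicity.

```python
import numpy as np
from scipy.stats import norm

def fbl_rate(snr, blocklength, error_prob):
    """Normal-approximation achievable rate (bits/channel use) at finite blocklength."""
    C = np.log2(1.0 + snr)                               # Shannon capacity (IBL limit)
    V = (1.0 - (1.0 + snr) ** -2) * np.log2(np.e) ** 2   # channel dispersion (bits^2)
    return C - np.sqrt(V / blocklength) * norm.isf(error_prob)

# Hypothetical link budgets (not taken from the paper): GU and UAV SNRs,
# blocklength of 200 symbols, target block error probability 1e-5.
snr_gu, snr_uav, n, eps = 10.0, 4.0, 200, 1e-5
print("GU  FBL rate:", fbl_rate(snr_gu, n, eps))
print("UAV FBL rate:", fbl_rate(snr_uav, n, eps))
print("UAV IBL (Shannon) rate:", np.log2(1 + snr_uav))
```

The gap between the last two numbers is the short-packet penalty that the FBL resource allocation must account for.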
In multicell massive multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) networks, base stations (BSs) with multiple antennas deliver radio-frequency energy in the downlink, and Internet-of-Things (IoT) devices use their harvested energy to support uplink data transmission. This paper investigates the energy efficiency (EE) problem for multicell massive MIMO NOMA networks with wireless power transfer (WPT). To maximize the EE of the network, we propose a novel joint power, time, antenna-selection, and subcarrier resource allocation scheme, which properly allocates the time for energy harvesting and data transmission. Both perfect and imperfect channel state information (CSI) are considered, and their corresponding EE performance is analyzed. Under quality-of-service (QoS) requirements, an EE maximization problem is formulated, which is non-trivial due to its non-convexity. We first adopt nonlinear fractional programming methods to convert the problem into a convex one, and then develop a distributed alternating direction method of multipliers (ADMM)-based approach to solve it. Simulation results demonstrate that, compared to alternative methods, the proposed algorithm converges in fewer iterations and achieves better EE performance.
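For intuition on the fractional-programming step, the sketch below runs a Dinkelbach-style iteration on a single-link toy EE problem, EE = R(p)/(P_c + p). The rate model, channel gain, and power budget are assumed values, and the paper's joint power/time/antenna/subcarrier structure and ADMM decomposition are not reproduced; this only illustrates the fractional-to-parametric conversion.

```python
import numpy as np

B, g, N0 = 1.0, 0.8, 0.1      # bandwidth, channel gain, noise power (assumed)
P_c, P_max = 0.5, 5.0         # circuit power and transmit-power budget (assumed)

def rate(p):
    return B * np.log2(1.0 + g * p / N0)

q = 0.0                        # current EE estimate (bits/Joule)
for _ in range(50):
    # Inner parametric problem: max_p rate(p) - q*(P_c + p); its stationary
    # point is p = B/(q*ln2) - N0/g, clipped to the feasible power range.
    p = P_max if q == 0 else float(np.clip(B / (q * np.log(2)) - N0 / g, 0.0, P_max))
    q_new = rate(p) / (P_c + p)
    if abs(q_new - q) < 1e-9:  # Dinkelbach iteration has converged
        break
    q = q_new
print("EE* =", q, "bits/Joule at p* =", p)
```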
This work proposes a new resource allocation optimization and network management framework for wireless networks, using neighborhood-based optimization rather than fully centralized or fully decentralized methods. We propose hierarchical clustering with a minimax linkage criterion for the formation of the virtual cells. Once the virtual cells are formed, we consider two cooperation models: the interference coordination model and the coordinated multi-point decoding model. In the first model, base stations in a virtual cell decode their signals independently but allocate the communication resources cooperatively. In the second model, base stations in the same virtual cell allocate the communication resources and decode their signals cooperatively. We address the resource allocation problem for each of these cooperation models. For the interference coordination model, this problem is an NP-hard mixed-integer optimization problem, whereas for the coordinated multi-point decoding model it is convex. Our numerical results indicate that proper design of the neighborhood-based optimization leads to significant gains in sum rate over fully decentralized optimization, yet may also incur a significant sum-rate penalty compared to fully centralized optimization. In particular, neighborhood-based optimization has a significant sum-rate penalty compared to fully centralized optimization in the coordinated multi-point model, but not in the interference coordination model.
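As a concrete illustration of virtual-cell formation, the following sketch performs naive agglomerative clustering with the minimax linkage criterion, where the distance between two clusters is the smallest radius at which one member (the prototype) of their union covers every point in the union. The 2-D base-station coordinates and the target number of virtual cells are assumptions for illustration, not data from the paper.

```python
import numpy as np

def minimax_linkage_clustering(points, n_clusters):
    """Agglomerative clustering with minimax linkage (naive O(n^3)-per-merge sketch)."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    clusters = [[i] for i in range(len(points))]

    def minimax_radius(idx):
        sub = dist[np.ix_(idx, idx)]
        return sub.max(axis=1).min()        # best prototype's covering radius

    while len(clusters) > n_clusters:
        # Merge the pair of clusters whose union has the smallest minimax radius.
        _, a, b = min((minimax_radius(clusters[a] + clusters[b]), a, b)
                      for a in range(len(clusters))
                      for b in range(a + 1, len(clusters)))
        clusters[a].extend(clusters[b])
        del clusters[b]
    return clusters

# Toy example: group 12 base-station locations into 3 virtual cells.
rng = np.random.default_rng(1)
bs_xy = rng.uniform(0, 10, size=(12, 2))
print(minimax_linkage_clustering(bs_xy, n_clusters=3))
```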
Resource management plays a pivotal role in wireless networks, which, unfortunately, leads to challenging NP-hard problems. Artificial Intelligence (AI), especially deep learning techniques, has recently emerged as a disruptive technology to solve such challenging problems in a real-time manner. However, although promising results have been reported, practical design guidelines and performance guarantees of AI-based approaches are still missing. In this paper, we endeavor to address two fundamental questions: 1) What are the main advantages of AI-based methods compared with classical techniques; and 2) Which neural network should we choose for a given resource management task. For the first question, four advantages are identified and discussed. For the second question, the \emph{optimality gap}, i.e., the gap to the optimal performance, is proposed as a measure for selecting model architectures, as well as for enabling a theoretical comparison between different AI-based approaches. Specifically, for the $K$-user interference management problem, we theoretically show that graph neural networks (GNNs) are superior to multi-layer perceptrons (MLPs), and the performance gap between these two methods grows with $\sqrt{K}$.
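For intuition on why a GNN is a natural fit here, the sketch below applies one untrained message-passing layer to a toy $K$-user interference channel and checks permutation equivariance, the structural property a plain MLP lacks. The feature construction, layer sizes, and random weights are assumptions for illustration, not the architecture analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K = 8                                    # number of transceiver pairs
H = rng.rayleigh(1.0, size=(K, K))       # |h_ij|: gain from TX j to RX i (toy fading)

# Shared per-node MLP weights (random here; a real system would train them).
W1 = rng.normal(size=(2, 16)) * 0.3
W2 = rng.normal(size=(16, 1)) * 0.3

def gnn_layer(H):
    direct = np.diag(H)                              # desired-link gains
    interference = H.sum(axis=1) - direct            # aggregated interfering links
    x = np.stack([direct, interference], axis=1)     # node features, shape (K, 2)
    hidden = np.maximum(x @ W1, 0.0)                 # shared MLP with ReLU
    return 1.0 / (1.0 + np.exp(-(hidden @ W2)))      # transmit-power fraction per user

p = gnn_layer(H).ravel()
# Permutation equivariance: relabelling the users permutes the output identically.
perm = rng.permutation(K)
assert np.allclose(gnn_layer(H[np.ix_(perm, perm)]).ravel(), p[perm])
print("power fractions:", np.round(p, 3))
```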
The massive sensing data generated by the Internet-of-Things will provide fuel for ubiquitous artificial intelligence (AI), automating the operations of our society ranging from transportation to healthcare. The realistic adoption of this technique, however, entails labelling the enormous volume of data prior to training AI models via supervised learning. To tackle this challenge, we explore a new perspective of wireless crowd labelling, which downloads data to many imperfect mobile annotators for repetition labelling by exploiting multicasting in wireless networks. In this cross-disciplinary area, the integration of rate-distortion theory and the principle of repetition labelling for accuracy improvement gives rise to a new tradeoff between radio and annotator resources under a constraint on labelling accuracy. Building on this tradeoff and aiming at maximizing the labelling throughput, this work focuses on the joint optimization of encoding rate, annotator clustering, and sub-channel allocation, which results in an NP-hard integer programming problem. To devise an efficient solution approach, we establish an optimal sequential annotator-clustering scheme based on the order of decreasing signal-to-noise ratios, so that the optimal solution can be found by an efficient tree search. Next, the solution is simplified by applying truncated channel inversion. Alternatively, the optimization problem can be recognized as a knapsack problem, which can be solved efficiently in pseudo-polynomial time by means of dynamic programming. In addition, exact policies are derived for the annotator-constrained and spectrum-constrained cases. Last, simulation results demonstrate significant throughput gains of the optimal solution over decoupled allocation of the two types of resources.
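The knapsack reformulation mentioned above can be handled with the textbook pseudo-polynomial dynamic program sketched below. The mapping of items to annotator clusters (value roughly corresponding to labelling throughput, weight to the sub-channels consumed) is an illustrative assumption rather than the paper's exact construction.

```python
# Pseudo-polynomial 0/1 knapsack dynamic program.
def knapsack(values, weights, capacity):
    best = [0] * (capacity + 1)                 # best[c] = max value within budget c
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):    # iterate backwards: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

# Toy instance: values ~ labelling throughput of a cluster, weights ~ sub-channels needed.
print(knapsack(values=[12, 7, 9, 4], weights=[4, 2, 3, 1], capacity=6))
```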
We integrate a wireless powered communication network with a cooperative cognitive radio network, where multiple secondary users (SUs), powered wirelessly by a hybrid access point (HAP), help a primary user relay its data. As a reward for the cooperation, the secondary network gains spectrum access, during which the SUs transmit to the HAP using time division multiple access. To maximize the sum-throughput of the SUs, we present a secondary sum-throughput optimal resource allocation (STORA) scheme. Under the constraint of meeting a target primary rate, the STORA scheme chooses the optimal set of relaying SUs and jointly performs the time and energy allocation for the SUs. Specifically, by exploiting the structure of the optimal solution, we find the order in which SUs are prioritized to relay primary data. Since the STORA scheme focuses on the sum-throughput, it may neglect individual SU throughput, resulting in low fairness. To enhance fairness, we investigate three resource allocation schemes: (i) equal time allocation, (ii) minimum throughput maximization, and (iii) proportional time allocation. Simulation results reveal the trade-off between sum-throughput and fairness. The minimum throughput maximization scheme is the fairest one, as each SU gets the same throughput, but yields the lowest SU sum-throughput.
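For a flavor of the time and energy allocation involved, the sketch below maximizes the SU sum-throughput in a simplified harvest-then-transmit TDMA model without the primary relaying phase. The channel gains, harvesting efficiency, and power levels are toy assumptions, not numbers from the paper.

```python
import numpy as np
from scipy.optimize import minimize

# The HAP broadcasts energy for a fraction tau0 of the frame; SU i then transmits
# for tau_i using the energy it harvested during tau0 (hypothetical parameters).
g = np.array([0.9, 0.5, 0.2])      # downlink energy-harvesting channel gains
h = np.array([0.8, 0.6, 0.3])      # uplink information channel gains
P_hap, eta, noise = 10.0, 0.6, 1e-2

def neg_sum_throughput(tau):
    tau0, taui = tau[0], tau[1:]
    # Throughput of SU i: tau_i * log2(1 + eta*P*g_i*h_i*tau0 / (noise*tau_i)).
    snr = eta * P_hap * g * h * tau0 / (noise * np.maximum(taui, 1e-9))
    return -np.sum(taui * np.log2(1.0 + snr))

cons = ({'type': 'eq', 'fun': lambda t: 1.0 - np.sum(t)},)   # time shares sum to 1
bnds = [(1e-6, 1.0)] * 4
res = minimize(neg_sum_throughput, x0=np.full(4, 0.25), bounds=bnds, constraints=cons)
print("time shares:", np.round(res.x, 3), "sum-throughput:", -res.fun)
```

Comparing the resulting per-SU throughputs with an equal-time split illustrates the sum-throughput versus fairness trade-off discussed in the abstract.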