LoRa wireless networks are considered a key enabling technology for next-generation Internet of Things (IoT) systems. New IoT deployments (e.g., smart-city scenarios) can have thousands of devices per square kilometer, leading to a huge amount of power consumption to provide connectivity. In this paper, we investigate green LoRa wireless networks powered by a hybrid of the grid and renewable energy sources, which can benefit from harvested energy while dealing with the intermittent supply. This paper proposes resource management schemes for the limited number of channels and spreading factors (SFs) with the objective of improving the LoRa gateway energy efficiency. First, the problem of grid power consumption minimization while satisfying the system's quality-of-service demands is formulated. Specifically, both the uncorrelated and the time-correlated channel scenarios are investigated. The optimal resource management problem is solved by decoupling the formulated problem into two sub-problems: a channel and SF assignment problem and an energy management problem. Since the optimal solution is obtained at high computational complexity, online resource management heuristic algorithms that minimize the grid energy consumption are proposed. Finally, taking into account the channel and energy correlation, adaptable resource management schemes based on reinforcement learning (RL) are developed. Simulation results show that the proposed resource management schemes offer efficient use of renewable energy in LoRa wireless networks.
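The abstract does not spell out the online heuristic, so the following is a rough sketch of what a greedy channel/SF assignment rule could look like: each device is given the feasible (channel, SF) pair with the lowest time-on-air, a common proxy for gateway energy drawn from the grid. All constants (8 channels, the simplified airtime formula, the payload size) are illustrative assumptions, not the paper's parameters.

```python
# Hypothetical greedy channel/SF assignment sketch (not the paper's algorithm):
# assign each device the feasible (channel, SF) pair with the lowest airtime.

SPREADING_FACTORS = [7, 8, 9, 10, 11, 12]  # LoRa SFs; airtime roughly doubles per step
N_CHANNELS = 8

def airtime(sf, payload_bytes=20, bandwidth_hz=125e3):
    """Crude LoRa time-on-air proxy: each symbol lasts 2^SF / BW seconds."""
    symbol_time = (2 ** sf) / bandwidth_hz
    n_symbols = 8 + payload_bytes * 8 / sf  # preamble + payload (rough)
    return n_symbols * symbol_time

def greedy_assign(min_sf_per_device):
    """min_sf_per_device: minimum feasible SF per device (set by link budget).
    Returns {device: (channel, sf)}, greedily minimizing total airtime."""
    free = {(ch, sf) for ch in range(N_CHANNELS) for sf in SPREADING_FACTORS}
    assignment = {}
    # Serve the most constrained devices (highest minimum SF) first.
    for dev, min_sf in sorted(enumerate(min_sf_per_device), key=lambda x: -x[1]):
        options = [(ch, sf) for (ch, sf) in free if sf >= min_sf]
        if not options:
            continue  # device deferred to the next transmission slot
        best = min(options, key=lambda pair: airtime(pair[1]))
        assignment[dev] = best
        free.remove(best)
    return assignment

print(greedy_assign([7, 9, 12, 7, 10]))
```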
In this article, we first present the vision, key performance indicators, key enabling techniques (KETs), and services of 6G wireless networks. Then, we highlight a series of general resource management (RM) challenges as well as unique RM challenges corresponding to each KET. The unique RM challenges in 6G necessitate the transformation of existing optimization-based solutions into artificial intelligence/machine learning-empowered solutions. In the sequel, we formulate a joint network selection and subchannel allocation problem for a 6G multi-band network that provides both further-enhanced mobile broadband (FeMBB) and extreme ultra-reliable low-latency communication (eURLLC) services to terrestrial and aerial users. Our solution highlights the efficacy of the multi-band network and demonstrates the robustness of dueling deep Q-learning in obtaining an efficient RM solution with a faster convergence rate than the deep Q-network (DQN) and double DQN algorithms.
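Dueling deep Q-learning is a standard architecture, so a minimal sketch can ground the comparison: the network splits into a state-value stream and an advantage stream and recombines them as Q(s,a) = V(s) + A(s,a) - mean_a A(s,a), which often converges faster than a plain DQN head. The state/action sizing below (per-band features, flattened (network, subchannel) actions) is an assumed encoding for illustration, not the paper's.

```python
import torch
import torch.nn as nn

class DuelingDQN(nn.Module):
    """Dueling architecture: separate state-value and advantage streams,
    recombined as Q(s,a) = V(s) + A(s,a) - mean_a A(s,a)."""
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)               # V(s)
        self.advantage = nn.Linear(hidden, n_actions)   # A(s, a)

    def forward(self, state):
        h = self.trunk(state)
        v = self.value(h)
        a = self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

# Assumed toy sizing: state = per-band SINR/load features,
# action = flattened (network, subchannel) pair.
net = DuelingDQN(state_dim=16, n_actions=4 * 10)  # e.g., 4 bands x 10 subchannels
q = net(torch.randn(2, 16))                       # batch of 2 states -> Q-values
print(q.shape)                                    # torch.Size([2, 40])
```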
Network slicing has emerged as a promising business model for operators, allowing them to sell customized slices to various tenants at different prices. In order to provide better-performing and cost-efficient services, network slicing involves challenging technical issues and urgently calls for intelligent innovations that keep resource management consistent with users' activities per slice. In that regard, deep reinforcement learning (DRL), which learns how to interact with the environment by trying alternative actions and reinforcing those that produce more rewarding consequences, is regarded as a promising solution. In this paper, after briefly reviewing the fundamental concepts of DRL, we investigate the application of DRL to typical resource management problems in network slicing scenarios, including radio resource slicing and priority-based core network slicing, and demonstrate the advantage of DRL over several competing schemes through extensive simulations. Finally, we discuss the possible challenges of applying DRL to network slicing from a general perspective.
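As a toy illustration of RL-driven radio resource slicing (not the paper's DRL agent), the sketch below runs tabular Q-learning over a handful of bandwidth partitions among three slices. The demand model, action set, and reward shaping are invented for illustration only.

```python
import random
from collections import defaultdict

# Hypothetical toy: an agent periodically re-partitions bandwidth among three
# slices; demands and the reward model are invented for illustration.
ACTIONS = [(6, 2, 2), (4, 4, 2), (4, 2, 4), (2, 4, 4)]  # bandwidth units per slice

def reward(alloc, demand):
    # Served traffic, with an SLA-style penalty for starving any slice.
    served = sum(min(a, d) for a, d in zip(alloc, demand))
    penalty = sum(1 for a, d in zip(alloc, demand) if a < d)
    return served - 2 * penalty

q_table = defaultdict(float)   # (state, action_index) -> Q value
alpha, gamma, eps = 0.1, 0.9, 0.1
state = (5, 3, 2)              # current per-slice demand (discretized)

for step in range(10_000):
    a = (random.randrange(len(ACTIONS)) if random.random() < eps
         else max(range(len(ACTIONS)), key=lambda i: q_table[(state, i)]))
    next_state = tuple(random.randint(1, 6) for _ in range(3))  # demand drift
    r = reward(ACTIONS[a], state)
    best_next = max(q_table[(next_state, i)] for i in range(len(ACTIONS)))
    q_table[(state, a)] += alpha * (r + gamma * best_next - q_table[(state, a)])
    state = next_state

# Learned partition for the demand profile (5, 3, 2):
print(ACTIONS[max(range(len(ACTIONS)), key=lambda i: q_table[((5, 3, 2), i)])])
```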
In this paper, we investigate the uplink transmission performance of low-power wide-area (LPWA) networks in the presence of coexisting radio modules. We adopt the long-range (LoRa) radio technique as an example of the network of focus, although our analysis can easily be extended to other settings. We exploit a new topology to model the network, where the locations of LoRa nodes follow a Poisson cluster process (PCP) while those of the other coexisting radio modules follow a Poisson point process (PPP). Unlike most performance analyses based on stochastic geometry, we take noise into consideration. More specifically, two models, with a fixed and a random number of active LoRa nodes in each cluster, respectively, are considered. To obtain insights, both exact and simple approximate expressions for the coverage probability are derived. Based on them, the area spectral efficiency and energy efficiency are obtained. From our analysis, we show how the performance of LPWA networks can be enhanced by adjusting the density of LoRa nodes around each LoRa receiver. Moreover, the simulation results unveil that there exists an optimal number of active LoRa nodes in each cluster that maximizes the area spectral efficiency.
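The closed-form expressions are not reproduced in the abstract, but the model itself is easy to sanity-check numerically. The Monte Carlo sketch below places Gaussian-clustered LoRa nodes around a receiver (the PCP part) and PPP-distributed interferers, and keeps noise in the SINR as the paper does. All densities, the path-loss exponent, and the threshold are illustrative, and fading is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_prob(n_trials=20_000, nodes_per_cluster=5, cluster_std=30.0,
                  interferer_density=1e-5, area_half=1_000.0,
                  alpha=3.5, sinr_thresh_db=0.0, noise=1e-9, tx_power=1.0):
    """Empirical P[SINR > threshold] under a PCP desired link + PPP interferers.
    Parameter values are illustrative, not the paper's."""
    thresh = 10 ** (sinr_thresh_db / 10)
    covered = 0
    for _ in range(n_trials):
        # Desired link: nearest of the Gaussian-scattered cluster members.
        cluster = rng.normal(0.0, cluster_std, size=(nodes_per_cluster, 2))
        r_sig = np.min(np.linalg.norm(cluster, axis=1))
        signal = tx_power * max(r_sig, 1.0) ** (-alpha)
        # Interference: PPP of coexisting radios over a square window.
        n_int = rng.poisson(interferer_density * (2 * area_half) ** 2)
        pts = rng.uniform(-area_half, area_half, size=(n_int, 2))
        r_int = np.maximum(np.linalg.norm(pts, axis=1), 1.0)
        interference = np.sum(tx_power * r_int ** (-alpha))
        if signal / (interference + noise) > thresh:
            covered += 1
    return covered / n_trials

print(coverage_prob())
```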
Resource management plays a pivotal role in wireless networks, which, unfortunately, leads to challenging NP-hard problems. Artificial intelligence (AI), especially deep learning, has recently emerged as a disruptive technology for solving such challenging problems in a real-time manner. However, although promising results have been reported, practical design guidelines and performance guarantees for AI-based approaches are still missing. In this paper, we endeavor to address two fundamental questions: 1) What are the main advantages of AI-based methods compared with classical techniques? and 2) Which neural network should we choose for a given resource management task? For the first question, four advantages are identified and discussed. For the second question, the optimality gap, i.e., the gap to the optimal performance, is proposed as a measure for selecting model architectures, as well as for enabling a theoretical comparison between different AI-based approaches. Specifically, for the $K$-user interference management problem, we theoretically show that graph neural networks (GNNs) are superior to multi-layer perceptrons (MLPs), and that the performance gap between these two methods grows with $\sqrt{K}$.
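The structural reason a GNN scales where an MLP does not is weight sharing under permutation equivariance: the same per-node update is applied to all $K$ transceiver pairs, so the parameter count is independent of $K$. A minimal numpy sketch of one such message-passing layer is given below; the weights and features are random placeholders, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 8, 4                        # users, feature width
W_self = rng.normal(size=(d, d))   # shared weights, reused for every node
W_agg = rng.normal(size=(d, d))

def gnn_layer(x, channel_gain):
    """x: (K, d) node features; channel_gain: (K, K) interference graph weights."""
    channel_gain = channel_gain.copy()
    np.fill_diagonal(channel_gain, 0.0)             # aggregate neighbors only
    agg = channel_gain @ x                          # weighted sum of neighbor features
    return np.maximum(x @ W_self + agg @ W_agg, 0)  # ReLU update, same for all K

x = rng.normal(size=(K, d))
g = np.abs(rng.normal(size=(K, K)))
print(gnn_layer(x, g).shape)  # (8, 4): works unchanged for any K, unlike a fixed-input MLP
```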
An important modulation technique for the Internet of Things (IoT) is the one proposed by the LoRa Alliance. In this paper, we analyze the M-ary LoRa modulation in the time and frequency domains. First, we provide the signal description in the time domain and show that LoRa is a memoryless continuous-phase modulation. The cross-correlation between the transmitted waveforms is determined, proving that LoRa can be considered an approximately orthogonal modulation only for large M. Then, we investigate the spectral characteristics of the signal modulated by random data, obtaining a closed-form expression for the spectrum in terms of Fresnel functions. Quite surprisingly, we find that LoRa has both continuous and discrete spectra, with the discrete spectrum containing exactly a fraction 1/M of the total signal power.
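The waveform description lends itself to a quick numerical illustration. The sketch below generates oversampled LoRa up-chirps whose start frequency is offset by m/M and folds at the band edge, then measures the largest normalized cross-correlation between distinct symbols, giving a numerical feel for the approximate orthogonality discussed above. The symbol sizes and oversampling factor are illustrative, and the phase is integrated numerically rather than in closed form.

```python
import numpy as np

def lora_symbol(m, M, os=8):
    """Oversampled LoRa up-chirp for symbol m: the instantaneous frequency
    starts at m/M, increases linearly, and folds at the band edge."""
    t = np.arange(M * os) / os               # time in chips
    freq = ((m + t) % M) / M                 # folded instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freq) / os # numerical phase integration
    return np.exp(1j * phase)

def max_cross_corr(M, os=8):
    """Largest normalized |<x_a, x_b>| over all distinct symbol pairs."""
    S = np.array([lora_symbol(m, M, os) for m in range(M)])
    G = np.abs(S.conj() @ S.T) / (M * os)    # normalized Gram matrix
    np.fill_diagonal(G, 0.0)
    return G.max()

for M in (16, 64, 256):
    print(M, round(max_cross_corr(M), 4))    # worst-case cross-correlation shrinks as M grows
```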