Renewable sources are taking center stage in electricity generation. Because these resources are intermittent, a gap between demand and supply arises. Several techniques have been proposed in the literature to address this gap, framed in terms of cost (adding peaker plants), data availability (demand-side management, DSM), hardware infrastructure (appliance-controlling DSM), and safety (voltage reduction). However, these solutions are not fair in terms of electricity distribution. In many cases, the available supply cannot match the demand during peak hours even though the total aggregated demand remains below the total supply over the whole day. Load shedding (complete blackout) is a commonly used remedy for the demand-supply gap, but it can cause substantial economic losses. To address this problem, we propose Soft Load Shedding (SLS), which assigns an electricity quota to each household in a fair way. We measure the fairness of SLS by defining a household satisfaction level, model household utilities with a parametric function, and formulate SLS as a social welfare problem. We also consider the revenue generated from the fair allocation as a performance measure. To evaluate our approach, we perform extensive experiments on both synthetic and real-world datasets and compare our model with several baselines, showing its effectiveness in terms of fair allocation and revenue generation.
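The quota assignment can be pictured as a small welfare-maximization sketch: each household receives a share of the available supply that maximizes a sum of concave utilities. The logarithmic utility, the supply value, and the household demands below are illustrative assumptions for this sketch, not the parametric utility or data used in the paper.

```python
# Illustrative soft-load-shedding quota sketch:
# maximize sum_i log(x_i) subject to sum_i x_i <= supply and x_i <= demand_i.
# The log utility is an assumption of this sketch; the paper models household
# utilities with a parametric function instead.

def soft_load_shedding(demands, supply, iters=100):
    """Water-filling allocation: x_i = min(demand_i, level), with the common
    level chosen so the quotas exactly exhaust the available supply."""
    if sum(demands) <= supply:          # no gap: everyone gets full demand
        return list(demands)
    lo, hi = 0.0, max(demands)
    for _ in range(iters):              # bisection on the water level
        level = (lo + hi) / 2
        used = sum(min(d, level) for d in demands)
        if used > supply:
            hi = level
        else:
            lo = level
    return [min(d, lo) for d in demands]

# Example: 3 households, aggregate demand (9 kWh) exceeds supply (6 kWh)
print(soft_load_shedding([2.0, 3.0, 4.0], 6.0))  # -> approximately [2.0, 2.0, 2.0]
```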
In this paper, we explore perpetual, scalable, low-power wide-area (LPWA) networks. Specifically, we focus on the uplink transmissions of non-orthogonal multiple access (NOMA)-based LPWA networks consisting of multiple self-powered nodes and a single NOMA-based gateway. The self-powered LPWA nodes use the harvest-then-transmit protocol: they harvest energy from ambient sources (solar and radio-frequency signals) and then transmit their signals. The main features of the studied LPWA network are different transmission times-on-air, multiple uplink transmission attempts, and duty-cycle restrictions. The aim of this work is to maximize the time-averaged sum of the uplink transmission rates by optimizing the transmission time-on-air allocation, the energy harvesting time allocation, and the power allocation, subject to a maximum transmit power and to the availability of the harvested energy. We propose a low-complexity solution that decouples the optimization problem into three sub-problems: we assign the LPWA node transmission times (using either a fair or an unfair approach), optimize the energy harvesting (EH) times using a one-dimensional search, and optimize the transmit powers using a concave-convex procedure (CCCP). In the simulations, we focus on Long Range (LoRa) networks as a practical example of an LPWA network. We validate the proposed solution and observe a $15\%$ performance improvement when using NOMA.
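The one-dimensional search over the EH time in a harvest-then-transmit frame can be illustrated with a simple grid search. The linear harvesting model, the channel gains, and the equal time-sharing rate expression below are assumptions made for this sketch only, not the system model of the paper.

```python
import math

# Illustrative one-dimensional search over the energy-harvesting (EH) time in
# a harvest-then-transmit frame of length T. The linear EH model, the channel
# gains and the single-carrier rate expression are assumptions of this sketch.

def sum_rate(tau, T, harvest_power, gains, noise=1e-9, eta=0.7, p_max=0.1):
    """Sum of uplink rates when nodes harvest for tau seconds and split the
    remaining T - tau seconds equally for their transmissions."""
    tx_time = (T - tau) / len(gains)            # equal time share per node
    rate = 0.0
    for g in gains:
        energy = eta * harvest_power * tau      # harvested energy (J)
        p = min(p_max, energy / tx_time)        # limited by EH and p_max
        rate += tx_time * math.log2(1 + p * g / noise)
    return rate / T                             # time-averaged sum rate

def best_eh_time(T=1.0, steps=1000, **kw):
    """Grid (one-dimensional) search for the EH time maximizing the sum rate."""
    candidates = [i * T / steps for i in range(1, steps)]
    return max(candidates, key=lambda tau: sum_rate(tau, T, **kw))

tau_star = best_eh_time(harvest_power=0.5, gains=[1e-6, 5e-7, 2e-7])
print(f"best EH time: {tau_star:.3f} s")
```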
A Load Balancing Relay Algorithm (LBRA) is proposed to solve the unfair spectrum resource allocation of traditional mobile machine-type communication (MTC) relaying. To obtain reasonable use of spectrum resources and a balanced distribution of MTC devices (MTCDs), spectrum resources are dynamically allocated by regrouping MTCDs on the MTCD-to-MTC-gateway link. Moreover, the system outage probability and transmission capacity under LBRA are derived. Numerical results show that the proposed algorithm outperforms the traditional method in both transmission capacity and outage probability, with a gain of about 0.7 dB in transmission capacity and about 0.8 dB in outage probability at high MTCD density.
The $\alpha$-fair resource allocation problem has received remarkable attention and has been studied in numerous application fields. Several algorithms have been proposed in the context of $\alpha$-fair resource sharing to compute its value distributively. However, little work has been done on its structural properties. In this work, we present a lower bound for the optimal solution of the weighted $\alpha$-fair resource allocation problem and compare it with existing propositions in the literature. Our derivations rely on a localization property, verified by optimization problems with a separable objective, that permits one to better exploit their local structure. We give a local version of the well-known midpoint domination axiom used to axiomatically build the Nash bargaining solution (or proportionally fair resource allocation). Moreover, we show how our lower bound can improve the performance of a distributed algorithm based on the Alternating Direction Method of Multipliers (ADMM). The evaluation of the algorithm shows that our lower bound can reduce its convergence time by up to two orders of magnitude compared to when the bound is not used at all or is simply looser.
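For reference, the weighted $\alpha$-fair objective referred to here is commonly written as maximizing $\sum_i w_i U_\alpha(x_i)$ with $U_\alpha(x) = x^{1-\alpha}/(1-\alpha)$ for $\alpha \neq 1$ and $U_\alpha(x) = \log x$ for $\alpha = 1$. A minimal sketch of this standard utility family follows; the weights and rates are illustrative only.

```python
import math

# Standard weighted alpha-fair utility (Mo-Walrand family); it recovers
# proportional fairness at alpha = 1 and approaches max-min fairness as
# alpha grows. The example weights and rates are illustrative only.

def alpha_fair_objective(rates, weights, alpha):
    total = 0.0
    for x, w in zip(rates, weights):
        if alpha == 1.0:
            total += w * math.log(x)                      # proportional fairness
        else:
            total += w * x ** (1.0 - alpha) / (1.0 - alpha)
    return total

print(alpha_fair_objective([1.0, 2.0, 4.0], [1, 1, 1], alpha=2.0))
```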
Load shedding has been one of the most widely used and effective emergency control approaches against voltage instability. With increased uncertainties and rapidly changing operational conditions in power systems, existing methods have limitations in terms of speed, adaptiveness, or scalability. Deep reinforcement learning (DRL) has been regarded and adopted as a promising approach for fast and adaptive grid stability control in recent years. However, existing DRL algorithms exhibit two outstanding issues when applied to power system control problems: 1) computational inefficiency, requiring extensive training and tuning time; and 2) poor scalability, making it difficult to scale to high-dimensional control problems. To overcome these issues, an accelerated DRL algorithm named PARS was developed and tailored for power system voltage stability control via load shedding. PARS features high scalability and is easy to tune, with only five main hyperparameters. The method was tested on both the IEEE 39-bus and IEEE 300-bus systems, the latter being by far the largest scale for such a study. Test results show that, compared with other methods including model-predictive control (MPC) and proximal policy optimization (PPO), PARS achieves better computational efficiency (faster convergence), more robustness in learning, and excellent scalability and generalization capability.
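To make the control setting concrete, the sketch below shows how a DRL agent might interact with a load-shedding environment: the state exposes bus voltages, the action is a per-bus shed fraction, and the reward penalizes voltage deviation and the amount of load shed. The toy environment, action bounds, and reward shaping are illustrative assumptions; they are not the PARS algorithm or its exact problem formulation.

```python
import random

# Minimal sketch of a DRL-style interaction loop for load shedding against
# voltage instability. The toy environment and reward are assumptions made
# for illustration only.

class ToyGridEnv:
    """Toy stand-in for a power-system simulator exposing bus voltages (p.u.)."""
    def __init__(self, n_buses=3):
        self.voltages = [0.85 + 0.05 * random.random() for _ in range(n_buses)]

    def step(self, shed_fractions):
        # Shedding load raises the local voltage in this crude toy model.
        self.voltages = [min(1.05, v + 0.2 * a)
                         for v, a in zip(self.voltages, shed_fractions)]
        # Reward: penalize voltage deviation from 1.0 p.u. and the load shed.
        reward = -sum(abs(v - 1.0) for v in self.voltages) \
                 - 0.1 * sum(shed_fractions)
        return self.voltages, reward

env = ToyGridEnv()
state = env.voltages
for t in range(5):
    action = [random.uniform(0.0, 0.2) for _ in state]   # placeholder policy
    state, reward = env.step(action)
    print(f"step {t}: reward = {reward:.3f}")
```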
In this paper, the problem of opportunistic spectrum sharing for next-generation wireless systems empowered by the cloud radio access network (C-RAN) is studied. More precisely, low-priority users employ cooperative spectrum sensing to detect a vacant portion of the spectrum that is not currently used by high-priority users. The scheme is designed to maximize the overall throughput of the low-priority users while guaranteeing the quality of service of the high-priority users. This objective is attained by optimally adjusting the spectrum sensing time with respect to imposed target probabilities of detection and false alarm, and by dynamically allocating and assigning C-RAN resources, i.e., transmit powers, sub-carriers, remote radio heads (RRHs), and baseband units. The resulting optimization problem is non-convex and NP-hard, and thus extremely difficult to tackle directly. To solve it, a low-complexity iterative approach is proposed in which the sensing time, user association parameters, and transmit powers of the RRHs are alternately assigned and optimized at every step. Numerical results are then provided to demonstrate the necessity of adjusting the sensing time in such systems and of balancing the sensing-throughput tradeoff.
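The sensing-throughput tradeoff mentioned above can be illustrated with a short sketch: a longer sensing time lowers the false-alarm probability but leaves less of the frame for data transmission. The energy-detector false-alarm expression and the numerical parameters below follow a commonly used single-band model, not the exact C-RAN formulation of the paper.

```python
import math

# Illustrative sensing-throughput tradeoff for the low-priority users, using
# a commonly assumed energy-detector model with a target detection probability.
# Frame length, sampling rate, SNRs and spectral efficiency are assumptions.

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_inv(p, lo=-10.0, hi=10.0):
    """Inverse of Q(x) by bisection (Q is decreasing)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if q_func(mid) > p else (lo, mid)
    return (lo + hi) / 2

def secondary_throughput(tau, T=0.1, fs=6e6, snr_p=-15.0, pd_target=0.9, c0=6.66):
    """Average low-priority throughput (bits/s/Hz) for sensing time tau (s);
    c0 ~ log2(1 + SNR_s) for an assumed 20 dB secondary link."""
    gamma = 10 ** (snr_p / 10)                       # primary SNR (linear)
    pf = q_func(math.sqrt(2 * gamma + 1) * q_inv(pd_target)
                + math.sqrt(tau * fs) * gamma)       # false-alarm probability
    return (T - tau) / T * (1 - pf) * c0

# Grid search for the sensing time that balances the tradeoff.
taus = [i * 1e-4 for i in range(1, 1000)]
best = max(taus, key=secondary_throughput)
print(f"best sensing time: {best * 1e3:.2f} ms")
```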