Averting the effects of anthropogenic climate change requires a transition from fossil fuels to low-carbon technology. One way to achieve this is to decarbonize the electricity grid: this reduces carbon emissions from electricity generation itself and also provides a low-carbon alternative for other sectors such as transport and heating, although further efforts in those sectors are needed for full decarbonization. Carbon taxes have been shown to be an efficient way to aid in this transition. In this paper, we demonstrate how to find optimal carbon tax policies through a genetic algorithm approach, using the electricity market agent-based model ElecSim. Specifically, we use the NSGA-II genetic algorithm to minimize average electricity price and the relative carbon intensity of the electricity mix. We demonstrate that it is possible to find a range of carbon taxes to suit differing objectives. Our results show that we are able to reduce electricity cost to below £10/MWh and carbon intensity to zero in every case. In terms of the optimal carbon tax strategy, we found that an increasing strategy between 2020 and 2035 was preferable. Each of the Pareto-front optimal tax strategies is above £81/tCO2 in every year, and the mean carbon tax across these strategies is £240/tCO2.
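For illustration, a minimal sketch of the two-objective NSGA-II setup described above is given below, using the pymoo library as a stand-in optimizer. The `CarbonTaxProblem` class, the toy price/intensity surrogate inside `_evaluate`, and all parameter values are assumptions for demonstration; in the actual study each candidate tax trajectory would be evaluated by a full ElecSim simulation.

```python
# Minimal two-objective NSGA-II sketch (toy surrogate in place of ElecSim runs).
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize


class CarbonTaxProblem(ElementwiseProblem):
    """Decision variables: one carbon tax level (£/tCO2) per year, 2020-2035."""

    def __init__(self):
        super().__init__(n_var=16, n_obj=2, xl=0.0, xu=300.0)

    def _evaluate(self, x, out, *args, **kwargs):
        # Hypothetical surrogate: intensity falls and price responds as the tax rises.
        # A real evaluation would run ElecSim with the tax trajectory x.
        mean_tax = float(np.mean(x))
        carbon_intensity = np.exp(-mean_tax / 100.0)
        avg_price = 40.0 + 0.05 * mean_tax - 25.0 * (1.0 - carbon_intensity)
        out["F"] = [avg_price, carbon_intensity]


res = minimize(CarbonTaxProblem(), NSGA2(pop_size=40), ("n_gen", 50), seed=1, verbose=False)
print(res.F[:5])  # a few points on the approximated Pareto front
```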
Price-based demand response (PBDR) has recently been attributed both great economic and environmental potential. However, determining its short-term effects on carbon emissions requires knowledge of marginal emission factors (MEFs), which, compared to grid mix emission factors (XEFs), are cumbersome to calculate due to the complex characteristics of national electricity markets. This study therefore proposes two merit order-based methods to approximate hourly MEFs and applies them to readily available datasets from 20 European countries for the years 2017-2019. Based on the resulting electricity prices, MEFs, and XEFs, standardized daily load shifts were simulated to quantify their effects on marginal costs and carbon emissions. Finally, by repeating the load shift simulations for different carbon price levels, the impact of the carbon price on the resulting carbon emissions was analyzed. Interestingly, the simulated price-based load shifts led to increases in operational carbon emissions for 8 of the 20 countries and to an average increase of 2.1% across all 20 countries. Switching from price-based to MEF-based load shifts instead yielded a 35% decrease in carbon emissions, albeit with 56% lower monetary cost savings than the price-based load shifts. Under specific circumstances, PBDR leads to an increase in carbon emissions, mainly due to the economic advantage that fuel sources such as lignite and coal have in the merit order. However, as the price of carbon is increased, the correlation between the carbon intensity and the marginal cost of the fuels substantially increases. Therefore, with adequate carbon prices, PBDR can be an effective tool for both economic and environmental improvement.
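The merit-order logic behind hourly MEFs and XEFs, and the effect of a carbon price on the dispatch order, can be sketched as follows. The plant list, costs, and emission factors are invented for illustration, and the paper's two approximation methods involve further detail (e.g., must-run behavior and cross-border exchange) not captured here.

```python
# Toy merit-order dispatch: the marginal plant sets the MEF,
# the generation-weighted average sets the XEF.
# Plant data (capacity in MW, marginal cost in EUR/MWh, emission factor in tCO2/MWh)
# are hypothetical.
plants = [
    {"name": "wind",     "cap": 3000, "cost":  5.0, "ef": 0.00},
    {"name": "lignite",  "cap": 2000, "cost": 30.0, "ef": 1.10},
    {"name": "coal",     "cap": 2000, "cost": 38.0, "ef": 0.90},
    {"name": "gas_ccgt", "cap": 3000, "cost": 45.0, "ef": 0.35},
]

def dispatch(demand_mw, carbon_price=0.0):
    """Stack plants by marginal cost (incl. carbon price); return hourly MEF and XEF."""
    order = sorted(plants, key=lambda p: p["cost"] + carbon_price * p["ef"])
    remaining, dispatched = demand_mw, []
    for p in order:
        gen = min(p["cap"], remaining)
        if gen > 0:
            dispatched.append((p, gen))
            remaining -= gen
        if remaining <= 0:
            break
    mef = dispatched[-1][0]["ef"]                              # marginal plant
    xef = sum(p["ef"] * g for p, g in dispatched) / demand_mw  # grid-mix average
    return mef, xef

print(dispatch(6000))                   # low carbon price: lignite and coal run early
print(dispatch(6000, carbon_price=80))  # a higher carbon price reorders the merit order
```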
Decentralized multi-agent control has broad applications, ranging from multi-robot cooperation to distributed sensor networks. In decentralized multi-agent control, systems are complex, with unknown or highly uncertain dynamics, so traditional model-based control methods can hardly be applied. Compared with model-based control in control theory, deep reinforcement learning (DRL) is a promising approach for learning the controller/policy from data without knowing the system dynamics. However, directly applying DRL to decentralized multi-agent control is challenging, as interactions among agents make the learning environment non-stationary. More importantly, existing multi-agent reinforcement learning (MARL) algorithms cannot ensure the closed-loop stability of a multi-agent system from a control-theoretic perspective, so the learned control policies are likely to generate abnormal or dangerous behaviors in real applications. Hence, without a stability guarantee, applying existing MARL algorithms to real multi-agent systems such as UAVs, robots, and power systems is of great concern. In this paper, we propose a new MARL algorithm for decentralized multi-agent control with a stability guarantee. The new algorithm, termed multi-agent soft actor-critic (MASAC), is developed under the well-known centralized-training-with-decentralized-execution framework. Closed-loop stability is guaranteed by introducing a stability constraint, designed based on Lyapunov's method from control theory, during the policy improvement step of our MASAC algorithm. Finally, we present a multi-agent navigation example to demonstrate the effectiveness of the proposed MASAC algorithm.
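As a rough sketch of how a Lyapunov-type constraint can enter the actor update, the PyTorch snippet below adds a penalized decrease condition to a soft actor-critic-style loss for one agent. The tiny networks, the decrease margin `alpha_L`, and the fixed multiplier `lam` are illustrative assumptions and not the paper's exact MASAC formulation.

```python
# Sketch: Lyapunov-constrained actor update for one agent (illustrative shapes only).
import torch
import torch.nn as nn

obs_dim, act_dim, batch = 8, 2, 64
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())
q_critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))
lyapunov = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

obs, next_obs = torch.randn(batch, obs_dim), torch.randn(batch, obs_dim)  # dummy batch
act, next_act = actor(obs), actor(next_obs)

# Soft actor-critic-style objective: maximize Q (entropy term omitted for brevity).
sac_loss = -q_critic(torch.cat([obs, act], dim=-1)).mean()

# Lyapunov decrease condition L(s', a') - L(s, a) <= -alpha_L * L(s, a),
# enforced here as a simple penalty with an assumed fixed multiplier lam.
L_now = lyapunov(torch.cat([obs, act], dim=-1))
L_next = lyapunov(torch.cat([next_obs, next_act], dim=-1))
alpha_L, lam = 0.1, 1.0
stability_violation = torch.relu(L_next - L_now + alpha_L * L_now).mean()

actor_loss = sac_loss + lam * stability_violation
actor_loss.backward()  # gradients flow into the decentralized actor
```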
Renewable-dominant power systems explore options to procure virtual inertia services from non-synchronous resources (e.g., batteries, wind turbines) in addition to the inertia traditionally provided by synchronous resources (e.g., thermal generators). This paper designs a stochastic electricity market that produces co-optimized and efficient prices for energy, reserve, and inertia. We formulate a convex chance-constrained stochastic unit commitment model with inertia requirements and obtain equilibrium energy, reserve, and inertia prices using convex duality. Numerical experiments on an illustrative system and a modified IEEE 118-bus system show the performance of the proposed pricing mechanism.
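A highly simplified, deterministic single-period sketch of how co-optimized prices can be read off as dual variables is given below using cvxpy; the three-generator data, the reserve and inertia requirements, and the linear relaxation are assumptions for illustration and omit the chance constraints and unit commitment of the full model.

```python
# Toy single-period dispatch: energy, reserve and inertia prices from dual variables.
import numpy as np
import cvxpy as cp

# Hypothetical generator data: capacity (MW), energy cost, reserve cost, inertia constant.
cap = np.array([400.0, 300.0, 300.0])
c_e = np.array([20.0, 35.0, 5.0])
c_r = np.array([2.0, 4.0, 6.0])
h   = np.array([4.0, 3.0, 0.0])   # third unit: cheap non-synchronous resource, zero inertia
demand, reserve_req, inertia_req = 600.0, 50.0, 1800.0

p = cp.Variable(3, nonneg=True)   # energy (MW)
r = cp.Variable(3, nonneg=True)   # reserve (MW)

balance = cp.sum(p) == demand
reserve = cp.sum(r) >= reserve_req
inertia = cp.sum(cp.multiply(h, p)) >= inertia_req   # crude stand-in for an inertia requirement
capacity = p + r <= cap

prob = cp.Problem(cp.Minimize(c_e @ p + c_r @ r), [balance, reserve, inertia, capacity])
prob.solve()

print("energy price  :", balance.dual_value)
print("reserve price :", reserve.dual_value)
print("inertia price :", inertia.dual_value)
```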
We study the ability of autonomous vehicles (AVs) to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting. We consider the problem of improving the throughput of a scaled model of the San Francisco-Oakland Bay Bridge: a two-stage bottleneck where four lanes reduce to two and then reduce to one. Although there is extensive work examining variants of bottleneck control in a centralized setting, there is less study of the challenging multi-agent setting where the large number of interacting AVs leads to significant optimization difficulties for reinforcement learning methods. We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved. We compare our results to a hand-designed feedback controller and demonstrate that our results sharply outperform the feedback controller despite extensive tuning. Additionally, we demonstrate that the RL-based controllers adopt a robust strategy that works across penetration rates, whereas the feedback controllers degrade immediately upon penetration rate variation. We investigate the feasibility of both action and observation decentralization and demonstrate that effective strategies are possible using purely local sensing. Finally, we open-source our code at https://github.com/eugenevinitsky/decentralized_bottlenecks.
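To illustrate what purely local sensing can look like in such a decentralized scheme, the snippet below sketches a per-vehicle observation and a bounded acceleration action; the feature choice, normalization constants, and bounds are assumptions for illustration and are not the observation or action space used in the paper.

```python
# Sketch of a decentralized, purely local per-AV observation (illustrative features only).
import numpy as np

MAX_SPEED = 30.0     # m/s, assumed normalization constant
MAX_HEADWAY = 100.0  # m, assumed normalization constant

def local_observation(ego_speed, headway, leader_speed, lane_index, num_lanes):
    """Each AV observes only itself and its immediate leader: no global state."""
    return np.array([
        ego_speed / MAX_SPEED,
        min(headway, MAX_HEADWAY) / MAX_HEADWAY,
        leader_speed / MAX_SPEED,
        lane_index / max(num_lanes - 1, 1),
    ], dtype=np.float32)

def apply_action(ego_speed, accel_cmd, dt=0.1):
    """Decentralized action: a bounded acceleration applied to the ego vehicle only."""
    return float(np.clip(ego_speed + np.clip(accel_cmd, -3.0, 3.0) * dt, 0.0, MAX_SPEED))

obs = local_observation(ego_speed=12.0, headway=25.0, leader_speed=10.0, lane_index=2, num_lanes=4)
print(obs, apply_action(12.0, accel_cmd=1.5))
```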
With the increased level of distributed generation and demand response comes the need for associated mechanisms that can perform well in the face of increasingly complex deregulated energy market structures. Using Lagrangian duality theory, we develop a decentralized market mechanism that ensures that, under the guidance of a market operator, self-interested market participants, namely generation companies (GenCos), distribution companies (DistCos), and transmission companies (TransCos), reach a competitive equilibrium. We show that even in the presence of informational asymmetries and nonlinearities (such as power losses and transmission constraints), the resulting competitive equilibrium is Pareto efficient.
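The coordination idea behind such a Lagrangian-duality-based mechanism can be sketched as a simple price-update (dual ascent) loop, shown below. The quadratic cost and utility functions, single-bus setting, and step size are assumptions for illustration and omit the losses and transmission constraints handled in the paper.

```python
# Sketch of decentralized coordination via dual ascent on a single price signal.
# Quadratic GenCo costs and DistCo utilities are hypothetical; losses and
# transmission constraints are omitted.
import numpy as np

gen_a = np.array([0.05, 0.08])     # GenCo cost 0.5*a*q^2, so best response q = price / a
gen_cap = np.array([400.0, 300.0])
dem_b = np.array([60.0, 80.0])     # DistCo utility b*d - 0.5*c*d^2, best response d = (b - price)/c
dem_c = np.array([0.10, 0.20])

price, step = 10.0, 0.0005
for _ in range(5000):
    supply = np.clip(price / gen_a, 0.0, gen_cap)          # each GenCo solves its own problem
    demand = np.clip((dem_b - price) / dem_c, 0.0, None)   # each DistCo solves its own problem
    price += step * (demand.sum() - supply.sum())          # market operator adjusts the price

print(f"clearing price ~ {price:.2f}, supply ~ {supply.sum():.1f}, demand ~ {demand.sum():.1f}")
```

At convergence the price balances total supply and demand, mirroring how the dual variable of the power balance constraint acts as the market-clearing price in the duality-based mechanism.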