With advances in Internet of Things technology, electric vehicles (EVs) have become easier to schedule in daily life, which is reshaping the electric load curve. It is important to design efficient charging algorithms that mitigate the negative impact of EV charging on the power grid. This paper investigates an EV charging scheduling problem that aims to reduce the charging cost while shaving the peak charging load, under unknown future information about EVs, such as arrival time, departure time, and charging demand. First, we formulate an EV charging problem to minimize the electricity bill of the EV fleet and study it in an online setting without knowledge of future information. We develop an actor-critic learning-based smart charging algorithm (SCA) to schedule EV charging against the uncertainties in EV charging behavior. The SCA learns an optimal EV charging strategy with continuous charging actions instead of a discrete approximation of charging. We further develop a customized actor-critic learning charging algorithm (CALC) that reduces the state dimension and thus improves computational efficiency. Finally, simulation results show that our proposed SCA reduces the EVs' expected cost by 24.03%, 21.49%, and 13.80% compared with the Eagerly Charging Algorithm, the Online Charging Algorithm, and the RL-based Adaptive Energy Management Algorithm, respectively. CALC is more computationally efficient, and its performance is close to that of SCA, with a cost gap of only 5.56%.
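As a rough illustration of the kind of continuous-action actor-critic update described above, the following is a minimal DDPG-style sketch, not the paper's SCA: the state features, reward shape, network sizes, and the 7.4 kW charger limit are all assumptions made for the example.

```python
# Minimal sketch of a continuous-action actor-critic update for EV charging
# (DDPG-style, illustrative only -- not the SCA implementation from the paper).
# State features, reward shape, and network sizes are assumptions.
import torch
import torch.nn as nn

STATE_DIM = 4      # e.g. [hour, remaining demand (kWh), time to departure (h), price]
MAX_RATE = 7.4     # assumed charger limit in kW

actor = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                      nn.Linear(64, 1), nn.Sigmoid())           # outputs rate in [0, 1]
critic = nn.Sequential(nn.Linear(STATE_DIM + 1, 64), nn.ReLU(),
                       nn.Linear(64, 1))                         # Q(state, action)
opt_a = torch.optim.Adam(actor.parameters(), lr=1e-4)
opt_c = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99

def update(state, action, reward, next_state):
    """One actor-critic step on a single transition (no replay buffer or target nets, for brevity)."""
    # Critic: regress Q(s, a) toward the bootstrapped TD target.
    with torch.no_grad():
        next_a = actor(next_state)
        target = reward + gamma * critic(torch.cat([next_state, next_a], dim=-1))
    q = critic(torch.cat([state, action], dim=-1))
    critic_loss = nn.functional.mse_loss(q, target)
    opt_c.zero_grad(); critic_loss.backward(); opt_c.step()

    # Actor: ascend the critic's estimate of Q(s, pi(s)).
    actor_loss = -critic(torch.cat([state, actor(state)], dim=-1)).mean()
    opt_a.zero_grad(); actor_loss.backward(); opt_a.step()

# Example transition: charge at 3 kW for one hour at $0.25/kWh (cost enters as negative reward).
s = torch.tensor([[18.0, 20.0, 6.0, 0.25]])
a = torch.tensor([[3.0 / MAX_RATE]])        # normalised charging rate
r = torch.tensor([[-3.0 * 0.25]])
s_next = torch.tensor([[19.0, 17.0, 5.0, 0.22]])
update(s, a, r, s_next)
```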
Electric vehicles (EVs) are an eco-friendly alternative to vehicles with internal combustion engines. Despite their environmental benefits, the massive electricity demand imposed by the anticipated proliferation of EVs could jeopardize the secure and economic operation of the power grid. Hence, proper charging coordination strategies will be indispensable to the future power grid. Coordinated EV charging schemes can be implemented as centralized, decentralized, or hierarchical systems, with the last two referred to as distributed charging control systems. This paper reviews the recent literature on distributed charging control schemes, in which the computations are distributed across multiple EVs and/or aggregators. First, we categorize optimization problems for EV charging in terms of operational aspects and cost aspects. Then, under each category, we provide a comprehensive discussion of algorithms for distributed EV charge scheduling from the perspectives of the grid operator, the aggregator, and the EV user. We also discuss how algorithms proposed in the literature cope with various uncertainties inherent to distributed EV charging control problems. Finally, we outline several research directions that require further attention.
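For context, a generic fleet charging-cost minimization of the kind categorized in such surveys can be written as follows; the notation is illustrative rather than taken from any specific paper:

```latex
\begin{align}
\min_{\{x_{n,t}\}} \quad & \sum_{t=1}^{T} c_t\!\left(b_t + \sum_{n=1}^{N} x_{n,t}\right) \\
\text{s.t.} \quad & \sum_{t=1}^{T} x_{n,t}\,\Delta t = E_n, \qquad n = 1,\dots,N, \\
& 0 \le x_{n,t} \le \bar{x}_n\,\mathbf{1}\{a_n \le t \le d_n\}, \qquad \forall n,\ t,
\end{align}
```

where $x_{n,t}$ is EV $n$'s charging rate in slot $t$, $c_t(\cdot)$ the electricity cost in slot $t$, $b_t$ the base (non-EV) load, $E_n$ the requested energy, $\bar{x}_n$ the charger limit, and $[a_n, d_n]$ the plug-in window. Distributed schemes differ mainly in how this coupled problem is decomposed across EVs and/or aggregators.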
We describe the architecture and algorithms of the Adaptive Charging Network (ACN), which was first deployed on the Caltech campus in early 2016 and is currently operating at over 100 other sites in the United States. The architecture enables real-time monitoring and control and supports electric vehicle (EV) charging at scale. The ACN adopts a flexible Adaptive Scheduling Algorithm based on convex optimization and model predictive control and allows for significant over-subscription of electrical infrastructure. We describe some of the practical challenges in real-world charging systems, including unbalanced three-phase infrastructure, non-ideal battery charging behavior, and quantized control signals. We demonstrate how the Adaptive Scheduling Algorithm handles these challenges, and compare its performance against baseline algorithms from the deadline scheduling literature using real workloads recorded from the Caltech ACN and accurate system models. We find that in these realistic settings, our scheduling algorithm can improve operator profit by 3.4 times over uncontrolled charging and consistently outperforms baseline algorithms when delivering energy in highly congested systems.
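To make the convex-optimization/MPC idea concrete, here is a hedged single-step sketch in cvxpy; the infrastructure limit, prices, and EV parameters are invented, and this is not the deployed Adaptive Scheduling Algorithm (which additionally handles three-phase limits, non-ideal batteries, and quantized control signals).

```python
# Sketch of one model-predictive scheduling step in the spirit of ACN-style
# adaptive scheduling (illustrative; not the deployed Adaptive Scheduling Algorithm).
# Infrastructure limits, prices, and EV parameters below are made-up assumptions.
import cvxpy as cp
import numpy as np

T = 12                     # remaining slots in the planning horizon
N = 3                      # EVs currently plugged in
price = np.linspace(0.20, 0.30, T)        # $/kWh forecast
e_remaining = np.array([10.0, 6.0, 4.0])  # kWh still requested per EV
r_max = np.array([7.4, 7.4, 3.3])         # per-EVSE rate limits in kW
cap = 15.0                 # shared transformer/feeder limit in kW
dt = 1.0                   # slot length in hours (simplified)

r = cp.Variable((N, T), nonneg=True)                       # charging rates to solve for
energy = cp.sum(r, axis=1) * dt                            # energy delivered per EV
cost = cp.sum(cp.multiply(price, cp.sum(r, axis=0))) * dt  # energy cost over the horizon

constraints = [
    r <= r_max[:, None],              # respect each EVSE's rate limit
    cp.sum(r, axis=0) <= cap,         # respect the shared infrastructure limit
    energy <= e_remaining,            # never overshoot the requested energy
]
# Penalise undelivered energy so demands are met whenever capacity allows.
objective = cp.Minimize(cost + 10.0 * cp.sum(e_remaining - energy))
cp.Problem(objective, constraints).solve()

# Only the first column is applied; the problem is re-solved next slot (MPC).
print(np.round(r.value[:, 0], 2))
```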
As an environment-friendly substitute for conventional fuel-powered vehicles, electric vehicles (EVs) and their components have been widely developed and deployed worldwide. The large-scale integration of EVs into the power grid brings both challenges and opportunities for system performance. On one hand, the load demand from EV charging has a large impact on the stability and efficiency of the power grid. On the other hand, EVs could potentially act as mobile energy storage systems that improve power network performance through, for example, load flattening, fast frequency control, and facilitating renewable energy integration. Evidently, uncontrolled EV charging could lead to inefficient power network operation or even security issues, which has spurred enormous research interest in designing charging coordination mechanisms. A key design challenge lies in the lack of complete knowledge of future events; indeed, the amount of knowledge of future events significantly impacts the design of efficient charging control algorithms. This article focuses on online EV charging scheduling techniques that deal with different degrees of uncertainty and randomness in future knowledge. We also highlight promising directions for future research on EV charging control.
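As one simple example of an online policy that needs no future information, the sketch below implements a generic least-laxity-first heuristic; it is for illustration only and is not drawn from the article.

```python
# Toy online scheduler (least-laxity-first) as one example of charging control
# that uses no future information -- only what is known about EVs present now.
# Generic heuristic for illustration, not an algorithm from the article.

def schedule_slot(evs, station_capacity_kw, dt_hours=1.0):
    """Allocate this slot's power among plugged-in EVs, most urgent first.
    Each EV is a dict with 'remaining_kwh', 'slots_to_departure', 'max_rate_kw'."""
    def laxity(item):
        _, ev = item
        # Slack = slots left minus slots needed at full rate; smaller = more urgent.
        slots_needed = ev['remaining_kwh'] / (ev['max_rate_kw'] * dt_hours)
        return ev['slots_to_departure'] - slots_needed

    rates = [0.0] * len(evs)
    budget = station_capacity_kw
    for i, ev in sorted(enumerate(evs), key=laxity):
        rates[i] = min(ev['max_rate_kw'], ev['remaining_kwh'] / dt_hours, budget)
        budget -= rates[i]
    return rates

evs = [
    {'remaining_kwh': 12.0, 'slots_to_departure': 3, 'max_rate_kw': 7.4},
    {'remaining_kwh': 4.0,  'slots_to_departure': 8, 'max_rate_kw': 7.4},
]
print(schedule_slot(evs, station_capacity_kw=10.0))   # -> [7.4, 2.6]
```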
The proliferation of plug-in electric vehicles (PEVs) advocates a distributed paradigm for the coordination of PEV charging. Distinct from existing primal-dual decomposition or consensus methods, this paper proposes a cutting-plane based distributed algorithm, which enables asynchronous coordination while preserving individuals' private information. To this end, an equivalent surrogate model is first constructed by exploiting the duality of the original optimization problem, masking the private information of individual users through a transformation. Then, a cutting-plane based algorithm is derived to solve the surrogate problem in a distributed manner, with an intrinsic ability to cope with various forms of asynchrony. Critical implementation issues, such as distributed initialization, cutting-plane generation, and localized stopping criteria, are discussed in detail. Numerical tests on IEEE 37- and 123-node feeders with real data show that the proposed method is resilient to various forms of asynchrony and admits a plug-and-play operation mode. It is expected that the proposed methodology provides an alternative path toward a more practical protocol for PEV charging.
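For intuition, a cutting-plane (Kelley-type) master problem over a concave dual/surrogate function $g(\lambda)$ takes the following generic form; the notation is illustrative and not the paper's:

```latex
\begin{align}
\max_{\lambda,\,\theta} \quad & \theta \\
\text{s.t.} \quad & \theta \le g\big(\lambda^{(k)}\big)
  + \big(s^{(k)}\big)^{\top}\big(\lambda - \lambda^{(k)}\big), \qquad k = 1,\dots,K,
\end{align}
```

where each query point $\lambda^{(k)}$ yields a local evaluation $g(\lambda^{(k)})$ and a subgradient $s^{(k)}$ from an EV or aggregator, and every new evaluation adds one cutting plane to the master problem; iterations stop once the gap between $\theta$ and the best $g(\lambda^{(k)})$ falls below a tolerance. Because cuts can arrive and be added independently, this structure lends itself naturally to asynchronous, plug-and-play operation.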
We consider the problem of demand-side energy management, where each household is equipped with a smart meter that can schedule home appliances online. The goal is to minimise the overall cost under a real-time pricing scheme. While previous works have introduced centralised approaches, we formulate the smart grid environment as a Markov game, where each household is a decentralised agent and the grid operator produces a price signal that adapts to the energy demand. The main challenges addressed by our approach are partial observability and the perceived non-stationarity of the environment from the viewpoint of each agent. We propose a multi-agent extension of a deep actor-critic algorithm that learns successfully in this environment. The algorithm learns a centralised critic that coordinates the training of all agents; our approach thus uses centralised learning with decentralised execution. Simulation results show that our online deep reinforcement learning method can reduce both the peak-to-average ratio of total energy consumed and the cost of electricity for all households, based purely on instantaneous observations and a price signal.
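A minimal shape-level sketch of the centralised-critic, decentralised-actor pattern (MADDPG-style) is given below; the dimensions and architectures are assumptions for illustration, not the authors' model.

```python
# Minimal shape sketch of "centralised critic, decentralised actors"
# (MADDPG-style; illustrative -- dimensions and architectures are assumptions).
import torch
import torch.nn as nn

N_AGENTS, OBS_DIM, ACT_DIM = 3, 5, 1   # e.g. 3 households, local observation, appliance action

# Decentralised execution: each actor maps ONLY its own observation to its action.
actors = [nn.Sequential(nn.Linear(OBS_DIM, 32), nn.ReLU(),
                        nn.Linear(32, ACT_DIM), nn.Tanh()) for _ in range(N_AGENTS)]

# Centralised training: the single shared critic sees every agent's observation and action.
critic = nn.Sequential(nn.Linear(N_AGENTS * (OBS_DIM + ACT_DIM), 64), nn.ReLU(),
                       nn.Linear(64, 1))

obs = torch.randn(1, N_AGENTS, OBS_DIM)                      # batch of joint observations
acts = torch.cat([actors[i](obs[:, i]) for i in range(N_AGENTS)], dim=-1)
joint = torch.cat([obs.flatten(start_dim=1), acts], dim=-1)  # shape (1, N*(OBS+ACT))
q_value = critic(joint)                                      # used only during training
print(q_value.shape)   # torch.Size([1, 1])
```

At execution time only the per-agent actors are needed, so each household acts on its own meter readings and the broadcast price, which is the "centralised learning, decentralised execution" property the abstract describes.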