
An Intelligent Energy Management Framework for Hybrid-Electric Propulsion Systems Using Deep Reinforcement Learning

Posted by: Peng Wu
Publication date: 2021
Language: English





Hybrid-electric propulsion systems powered by clean energy derived from renewable sources offer a promising approach to decarbonise the world's transportation systems. Effective energy management systems are critical for such systems to achieve optimised operational performance. However, developing an intelligent energy management system is challenging for applications such as ships, which operate in a highly stochastic environment and require concurrent control over multiple power sources. This article proposes an intelligent energy management framework for hybrid-electric propulsion systems using deep reinforcement learning. In the proposed framework, a Twin-Delayed Deep Deterministic Policy Gradient (TD3) agent is trained on an extensive volume of historical load profiles to generate a generic energy management strategy. The strategy, i.e. the core of the energy management system, can concurrently control multiple power sources in continuous state and action spaces. The proposed framework is applied to a coastal ferry model with multiple fuel cell clusters and a battery, achieving near-optimal cost performance on novel future voyages.
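
Since the paper itself is not reproduced here, the following is only a minimal sketch of the kind of setup the abstract describes: a toy environment with two fuel-cell clusters and a battery meeting a stochastic load, trained with the TD3 implementation from stable-baselines3. The environment dynamics, reward, and load model are hypothetical stand-ins, not the authors' coastal ferry model.

# Illustrative sketch only: the environment, reward, and load profiles below are
# hypothetical stand-ins for the paper's coastal ferry model.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3


class HybridPropulsionEnv(gym.Env):
    """Toy hybrid-electric EMS environment: two fuel-cell clusters plus one
    battery must jointly meet a stochastic propulsion load at each step."""

    def __init__(self, horizon=200):
        super().__init__()
        self.horizon = horizon
        # Observation: [current load (normalised), battery state of charge]
        self.observation_space = spaces.Box(low=np.array([0.0, 0.0]),
                                            high=np.array([1.0, 1.0]),
                                            dtype=np.float32)
        # Action: normalised power setpoints for the two fuel-cell clusters
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(2,),
                                       dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        self.soc = 0.5
        self.load = self.np_random.uniform(0.2, 0.8)
        return self._obs(), {}

    def _obs(self):
        return np.array([self.load, self.soc], dtype=np.float32)

    def step(self, action):
        fc_power = float(np.sum(action)) / 2.0   # combined fuel-cell output
        batt_power = self.load - fc_power        # battery covers the residual
        self.soc = float(np.clip(self.soc - 0.05 * batt_power, 0.0, 1.0))
        # Penalise fuel use plus deviation from a healthy state of charge
        reward = -(fc_power + abs(self.soc - 0.5))
        self.t += 1
        self.load = self.np_random.uniform(0.2, 0.8)
        terminated = self.t >= self.horizon
        return self._obs(), reward, terminated, False, {}


env = HybridPropulsionEnv()
model = TD3("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=5_000)               # short demo training run

The key point the sketch mirrors is that both the state (load, state of charge) and the action (per-source power setpoints) are continuous, which is why an actor-critic method such as TD3 is used rather than a discrete-action algorithm.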




Read also

New forms of on-demand transportation such as ride-hailing and connected autonomous vehicles are proliferating, yet they are a challenging use case for electric vehicles (EVs). This paper explores the feasibility of using deep reinforcement learning (DRL) to optimize a driving and charging policy for a ride-hailing EV agent, with the goal of reducing costs and emissions while increasing the transportation service provided. We introduce a data-driven simulation of a ride-hailing EV agent that provides transportation service and recharges at congested charging infrastructure. We then formulate a test case for the agent's sequential driving and charging decision-making problem and apply DRL to optimize the agent's decision-making policy. We evaluate the performance against hand-written policies and show that our agent learns to act competitively without any prior knowledge.
161 - Teng Liu, Bo Wang, Wenhao Tan 2020
Real-time application of energy management strategies (EMSs) in hybrid electric vehicles (HEVs) is one of the most demanding requirements facing researchers and engineers. Inspired by the strong problem-solving capabilities of deep reinforcement learning (DRL), this paper proposes a real-time EMS that combines a DRL method with transfer learning (TL). The related EMSs are derived from, and evaluated on, real-world driving cycle data collected from the Transportation Secure Data Center (TSDC). The specific DRL algorithm is proximal policy optimization (PPO), which belongs to the family of policy gradient (PG) techniques. Specifically, many source driving cycles are used to train the parameters of the deep network with PPO. The learned parameters are then transferred to the target driving cycles under the TL framework. The EMSs for the target driving cycles are evaluated and compared under different training conditions. Simulation results indicate that the presented transfer DRL-based EMS can effectively reduce time consumption while guaranteeing control performance.
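
A hedged sketch of the pre-train-then-transfer pattern described above, using the PPO implementation from stable-baselines3. The two Pendulum-v1 environments are placeholders for the TSDC-based source and target driving-cycle simulators, which are not available here.

# Illustrative sketch of transfer learning with PPO: pre-train on source
# environments, then fine-tune the same network on the target environment.
# Pendulum-v1 is only a placeholder for the driving-cycle simulators.
import gymnasium as gym
from stable_baselines3 import PPO

source_env = gym.make("Pendulum-v1")   # stand-in for the source driving cycles
target_env = gym.make("Pendulum-v1")   # stand-in for the target driving cycle

# Stage 1: learn the deep network parameters on the source cycles.
model = PPO("MlpPolicy", source_env, verbose=0)
model.learn(total_timesteps=10_000)

# Stage 2: transfer the learned parameters and continue training on the target.
model.set_env(target_env)
model.learn(total_timesteps=2_000, reset_num_timesteps=False)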
Building energy management is one of the core problems in modern power grids: reducing energy consumption while ensuring occupants' comfort. However, building energy management systems (BEMS) now face more challenges and uncertainties with the increasing penetration of renewables and the complicated interactions between humans and buildings. Classical model predictive control (MPC) has shown its capacity to reduce building energy consumption, but it suffers from labor-intensive modelling and complex online control optimization. Recently, with the growing accessibility of building control and automation data, data-driven solutions have attracted more research interest. This paper presents a compact review of recent advances in data-driven MPC and reinforcement learning based control methods for BEMS. The main challenges in these approaches and insights on the selection of a control method are discussed.
104 - Hyunsung Lee 2020
Storage systems for cloud computing merge a large number of commodity computers into a single large storage pool. They provide high-performance storage over an unreliable and dynamic network at a lower cost than purchasing and maintaining a large mainframe. In this paper, we examine whether it is feasible to apply Reinforcement Learning (RL) to systems-domain problems. Our experiments show that the RL model is comparable to, and even outperforms, other heuristics for the block management problem. However, our experiments are limited in terms of scalability and fidelity. Even though our formulation is not very practical, applying Reinforcement Learning to the systems domain could offer good alternatives to existing heuristics.
The cost of power distribution infrastructure is driven by the peak power encountered in the system. Distribution network operators therefore consider billing consumers behind a common transformer as a function of their peak demand, leaving it to the consumers to manage their collective costs. This management problem is, however, not trivial. In this paper, we consider a multi-agent residential smart grid system, where each agent has local renewable energy production and energy storage, and all agents are connected to a local transformer. The objective is to develop an optimal policy that minimizes the economic cost, consisting of both the spot-market cost for each consumer and their collective peak-power cost. We propose to use a parametric Model Predictive Control (MPC) scheme to approximate the optimal policy. The optimality of this policy is limited by its finite horizon and by inaccurate forecasts of the local power production and consumption. A Deterministic Policy Gradient (DPG) method is deployed to adjust the MPC parameters and improve the policy. Our simulations show that the proposed MPC-based Reinforcement Learning (RL) method can effectively decrease the long-term economic cost for this smart grid problem.
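
A minimal numpy sketch of the general idea of tuning a parametric MPC-style policy against observed closed-loop cost. The one-step controller, the synthetic prices and demand, and the finite-difference gradient (used here in place of the authors' actor-critic deterministic policy gradient) are all illustrative assumptions, not the paper's formulation.

# Illustrative sketch: theta is a tunable parameter of a crude one-step "MPC"
# that decides how much storage to discharge against high spot prices; it is
# updated from closed-loop episode cost via a finite-difference gradient.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.5                        # tunable controller parameter
lr, eps = 0.05, 0.01               # learning rate and finite-difference step

def episode_cost(theta, prices, demand):
    """Closed-loop cost: discharge theta * demand from storage whenever the
    forecast price is above its episode mean."""
    soc, cost = 5.0, 0.0
    for p, d in zip(prices, demand):
        discharge = theta * d if p > prices.mean() else 0.0
        discharge = min(discharge, soc)
        soc -= discharge
        cost += p * (d - discharge)
    return cost

for _ in range(200):
    prices = rng.uniform(0.1, 0.5, size=24)   # synthetic hourly spot prices
    demand = rng.uniform(0.5, 2.0, size=24)   # synthetic household demand
    # Finite-difference estimate of the cost gradient with respect to theta
    grad = (episode_cost(theta + eps, prices, demand) -
            episode_cost(theta - eps, prices, demand)) / (2 * eps)
    theta = float(np.clip(theta - lr * grad, 0.0, 1.0))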