
Cloud-Based Dynamic Programming for an Electric City Bus Energy Management Considering Real-Time Passenger Load Prediction

Added by Junzhe Shi
Publication date: 2020
Language: English





Electric city buses have gained popularity in recent years for their low greenhouse gas emissions, low noise levels, etc. Unlike a passenger car, the weight of a city bus varies significantly with the number of onboard passengers, an effect that is not well studied in the existing literature. This study proposes a passenger load prediction model that uses day-of-week, time-of-day, weather, temperature, wind level, and holiday information as inputs. An average model, a Regression Tree, a Gradient Boost Decision Tree, and Neural Network models are compared for passenger load prediction. The Gradient Boost Decision Tree model is selected for its best accuracy and high stability. Given the predicted passenger load, a dynamic programming algorithm determines the optimal power demand for the supercapacitor and the battery by optimizing battery aging and energy usage in the cloud. Rule extraction is then performed on the dynamic programming results, and the extracted rules are loaded in real time to the onboard controllers of the vehicles. The proposed cloud-based dynamic programming and rule extraction framework with passenger load prediction reduces bus operating costs by 4% and 11% in off-peak and peak hours, respectively. The operating cost achieved by the proposed framework is within 1% of that of dynamic programming with true passenger load information.
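As a rough illustration of the passenger load model described above, the sketch below fits a gradient-boosted regressor on synthetic data built from the same kinds of calendar and weather features named in the abstract; the data, column order, and hyperparameters are placeholders and not the authors' setup.

```python
# Minimal sketch of a gradient-boosted passenger load predictor, assuming the
# feature set named in the abstract (day-of-week, time-of-day, weather,
# temperature, wind level, holiday flag). All data here are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.integers(0, 7, n),    # day of week (0 = Monday)
    rng.integers(0, 24, n),   # hour of day
    rng.integers(0, 4, n),    # weather category (clear/cloudy/rain/snow)
    rng.normal(15, 8, n),     # temperature [degC]
    rng.integers(0, 6, n),    # wind level
    rng.integers(0, 2, n),    # holiday flag
])
# Synthetic passenger load with a peak-hour pattern, for illustration only.
y = 20 + 15 * np.isin(X[:, 1], [7, 8, 17, 18]) + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("MAE [passengers]:", mean_absolute_error(y_te, model.predict(X_te)))
```

In the full framework, the predicted load would then feed the cloud-side dynamic programming that splits the power demand between the battery and the supercapacitor.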



Related research

Electric vehicles (EVs) have been growing rapidly in popularity in recent years and have become a future trend. It is an important aspect of user experience to know the Remaining Charging Time (RCT) of an EV with confidence. However, it is difficult to find an algorithm that accurately estimates the RCT for vehicles in the current EV market. The maximum RCT estimation error of the Tesla Model X can be as high as 60 minutes from a 10% to 99% state-of-charge (SOC) while charging at direct current (DC). A highly accurate RCT estimation algorithm for electric vehicles is in high demand and will continue to be as EVs become more popular. There are currently two challenges to arriving at an accurate RCT estimate. First, most commercial chargers cannot provide requested charging currents during a constant current (CC) stage. Second, it is hard to predict the charging current profile in a constant voltage (CV) stage. To address the first issue, this study proposes an RCT algorithm that updates the charging accuracy online in the CC stage by considering the confidence interval between the historical charging accuracy and real-time charging accuracy data. To solve the second issue, this study proposes a battery resistance prediction model to predict charging current profiles in the CV stage, using a Radial Basis Function (RBF) neural network (NN). The test results demonstrate that the RCT algorithm proposed in this study achieves an error rate improvement of 73.6% and 84.4% over the traditional method in the CC and CV stages, respectively.
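For intuition on the CC-stage idea, the following sketch derates the requested current by a charging-accuracy factor blended from historical and real-time values; the blending weight and function names are assumptions for illustration, not the paper's confidence-interval update.

```python
# Rough sketch of the constant-current (CC) stage idea described above: the
# requested current is derated by a "charging accuracy" factor blended from
# historical and real-time measurements. Names and weights are hypothetical.
def estimate_rct_cc(remaining_ah, requested_current_a,
                    hist_accuracy, realtime_accuracy, weight=0.5):
    """Return an estimated remaining charging time [h] for the CC stage."""
    # Blend historical vs. real-time accuracy (a crude stand-in for the
    # confidence-interval update described in the abstract).
    accuracy = weight * hist_accuracy + (1.0 - weight) * realtime_accuracy
    effective_current = max(requested_current_a * accuracy, 1e-6)
    return remaining_ah / effective_current

# Example: 40 Ah still to charge, 100 A requested, charger delivering ~85-90%.
print(estimate_rct_cc(40.0, 100.0, hist_accuracy=0.90, realtime_accuracy=0.85))
```

The CV-stage current profile, which the paper predicts via an RBF neural network over battery resistance, would be integrated in the same way to extend the estimate beyond the CC stage.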
As a model-free optimization and decision-making method, deep reinforcement learning (DRL) has been widely applied to the field of energy management in the energy Internet. However, some DRL-based energy management schemes also incorporate the prediction module used by traditional model-based methods, which appears to be unnecessary and even detrimental. In this work, we present the standard DRL-based energy management scheme with and without prediction. These two schemes are then compared within a unified energy management framework. The simulation results demonstrate that the energy management scheme without prediction is superior to the scheme with prediction. This work intends to rectify the misuse of DRL methods in the field of energy management.
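A minimal way to picture the two schemes being compared is the observation vector the agent sees with and without appended forecast features; the feature names in the sketch below are placeholders, not the paper's state design.

```python
# Illustrative contrast between the two state designs compared above: an
# observation built only from measured quantities vs. one that also appends
# forecast features. Feature names are hypothetical.
import numpy as np

def build_state(soc, load_kw, price, forecast_kw=None):
    """Assemble a DRL observation; include forecasts only if provided."""
    base = [soc, load_kw, price]
    if forecast_kw is not None:
        base.extend(forecast_kw)   # scheme "with prediction"
    return np.asarray(base, dtype=np.float32)

s_without = build_state(0.6, 12.0, 0.18)
s_with = build_state(0.6, 12.0, 0.18, forecast_kw=[11.5, 13.0, 12.2])
print(s_without.shape, s_with.shape)
```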
Microgrid (MG) energy management is an important part of MG operation. Various entities are generally involved in the energy management of an MG, e.g., the energy storage system (ESS), renewable energy resources (RER), and user loads, and it is crucial to coordinate these entities. Considering the significant potential of machine learning techniques, this paper proposes a correlated deep Q-learning (CDQN) based technique for MG energy management. Each electrical entity is modeled as an agent with a neural network that predicts its own Q-values, after which the correlated Q-equilibrium is used to coordinate the operation among agents. In this paper, a Long Short-Term Memory (LSTM) based deep Q-learning algorithm is introduced, and the correlated equilibrium is proposed to coordinate the agents. The simulation results show 40.9% and 9.62% higher profit for the ESS agent and the photovoltaic (PV) agent, respectively.
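A bare-bones version of one agent's LSTM-based Q-network might look like the sketch below; the use of plain PyTorch, the layer sizes, and the discrete action count are assumptions, and the correlated-equilibrium coordination between agents is not shown.

```python
# Minimal sketch of a single agent's LSTM-based Q-network, in the spirit of
# the CDQN scheme described above. Dimensions are illustrative placeholders.
import torch
import torch.nn as nn

class LSTMQNetwork(nn.Module):
    def __init__(self, obs_dim=4, hidden_dim=32, n_actions=5):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_actions)

    def forward(self, obs_seq):
        # obs_seq: (batch, time, obs_dim) history of this agent's measurements
        out, _ = self.lstm(obs_seq)
        return self.head(out[:, -1, :])   # Q-values at the latest time step

q_net = LSTMQNetwork()
history = torch.randn(1, 24, 4)           # e.g. 24 hourly observations
print(q_net(history))                      # one Q-value per discrete action
```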
The empirical mode decomposition (EMD) method and its variants have been extensively employed in the load and renewable forecasting literature. Using this multiresolution decomposition, time series (TS) related to the historical load and renewable generation are decomposed into several intrinsic mode functions (IMFs), which are less non-stationary and non-linear. As such, the prediction of the components can theoretically be carried out with notably higher precision. The EMD method is prone to several issues, including modal aliasing and boundary effect problems, but the TS decomposition-based load and renewable generation forecasting literature primarily focuses on comparing the performance of different decomposition approaches from the forecast accuracy standpoint; as a result, these problems have rarely been scrutinized. Underestimating these issues can lead to poor performance of the forecast model in real-time applications. This paper examines these issues and their importance in the model development stage. Using real-world data, EMD-based models are presented, and the impact of the boundary effect is illustrated.
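The decomposition step itself can be reproduced with an off-the-shelf implementation; the sketch below assumes the PyEMD package (installed as EMD-signal) and a synthetic series standing in for a historical load, with the samples near both ends of each IMF being where the boundary effect discussed above shows up.

```python
# Small sketch of EMD-based decomposition, assuming the PyEMD package
# (pip install EMD-signal). The series is synthetic and only illustrative.
import numpy as np
from PyEMD import EMD

t = np.linspace(0, 7, 7 * 24)                      # one week, hourly samples
load = (50 + 10 * np.sin(2 * np.pi * t)            # daily cycle
        + 3 * np.sin(2 * np.pi * 7 * t)            # faster component
        + np.random.default_rng(0).normal(0, 1, t.size))

imfs = EMD().emd(load)                             # rows: IMF_1 ... residual
print("number of IMFs (incl. residual):", imfs.shape[0])

# The usual EMD-based pipeline forecasts each IMF separately and sums the
# forecasts; distortion near the end of each IMF (the boundary effect)
# propagates directly into those forecasts, which is the issue examined above.
```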
This paper presents a constrained deep adaptive dynamic programming (CDADP) algorithm to solve general nonlinear optimal control problems with known dynamics. Unlike previous ADP algorithms, it can directly deal with problems with state constraints. Both the policy and the value function are approximated by deep neural networks (NNs), which directly map the system state to the action and the value, respectively, without needing hand-crafted basis functions. The proposed algorithm handles state constraints by transforming the policy improvement process into a constrained optimization problem. Meanwhile, a trust-region constraint is added to prevent excessive policy updates. We first linearize this constrained optimization problem locally into a quadratically constrained quadratic programming problem, and then obtain the optimal update of the policy network parameters by solving its dual problem. We also propose a series of recovery rules to update the policy in case the primal problem is infeasible. In addition, parallel learners are employed to explore different state spaces, which stabilizes and accelerates learning. A vehicle control problem in a path-tracking task is used to demonstrate the effectiveness of the proposed method.
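The actor and critic parameterization described above can be sketched as two plain networks mapping the state directly to an action and a value; the constrained policy-improvement step (QCQP linearization, dual solve, recovery rules) is omitted here, so this is only an illustrative skeleton, not the CDADP update.

```python
# Sketch of the two function approximators named above: a policy network and
# a value network mapping the state directly to action and value, with no
# hand-crafted basis functions. Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Policy(nn.Module):
    def __init__(self, state_dim=4, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, action_dim))
    def forward(self, s):
        return self.net(s)       # action for each state in the batch

class Value(nn.Module):
    def __init__(self, state_dim=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                 nn.Linear(64, 1))
    def forward(self, s):
        return self.net(s)       # scalar value estimate per state

policy, value = Policy(), Value()
s = torch.randn(8, 4)                        # a batch of states
print(policy(s).shape, value(s).shape)       # (8, 2) actions, (8, 1) values
```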
