The rapidly growing use of lithium-ion batteries across various industries highlights the pressing issue of optimal charging control, as charging plays a crucial role in the health, safety and lifetime of batteries. The literature increasingly adopts model predictive control (MPC) to address this issue, taking advantage of its capability to perform optimization under constraints. However, the computationally complex online constrained optimization intrinsic to MPC often hinders real-time implementation. This paper therefore develops a framework for real-time charging control based on explicit MPC (eMPC), exploiting its ability to characterize the solution of an MPC problem explicitly. The study begins with the formulation of MPC charging based on a nonlinear equivalent circuit model. Multi-segment linearization is then applied to the original model, and applying the eMPC design to the resulting linear models yields a charging control algorithm. The proposed algorithm shifts the constrained optimization offline by precomputing explicit solutions to the charging problem and expressing the charging law as piecewise affine functions. This drastically reduces not only the online computational cost of the control run but also the difficulty of coding. Extensive numerical simulations and experimental results verify the effectiveness of the proposed eMPC charging control framework and algorithm. The research results can potentially meet the needs of real-time battery management running on embedded hardware.
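The online step of an explicit MPC law reduces to a region lookup followed by an affine evaluation. A minimal sketch of that step, with a hypothetical one-dimensional partition (the region matrices and gains below are illustrative placeholders, not taken from the paper):

```python
import numpy as np

# Each precomputed region is a polyhedron {x : H x <= k}; inside it the
# charging law is the affine function u = F x + g. The two regions below
# partition a normalized state-of-charge interval [0, 1] and are invented
# for illustration only.
regions = [
    {"H": np.array([[1.0], [-1.0]]), "k": np.array([0.5, 0.0]),
     "F": np.array([[-2.0]]), "g": np.array([1.0])},
    {"H": np.array([[1.0], [-1.0]]), "k": np.array([1.0, -0.5]),
     "F": np.array([[0.0]]), "g": np.array([0.0])},
]

def empc_charging_current(x):
    """Online step: locate the polyhedral region containing state x and
    evaluate the stored affine law -- no optimization at run time."""
    for r in regions:
        if np.all(r["H"] @ x <= r["k"] + 1e-9):
            return r["F"] @ x + r["g"]
    raise ValueError("state outside the precomputed partition")

u = empc_charging_current(np.array([0.2]))  # low state of charge -> larger current
```

Because the loop involves only comparisons and one matrix-vector product, it is well suited to embedded hardware, which is the point the abstract makes about reduced online cost.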
Lithium-ion battery packs are usually composed of hundreds of cells arranged in series and parallel connections. The proper functioning of these complex devices requires suitable Battery Management Systems (BMSs). Advanced BMSs rely on mathematical models to ensure safety and high performance. While many approaches have been proposed for the management of single cells, the control of multiple cells has been less investigated and usually relies on simplified models such as equivalent circuit models. This paper addresses the management of a battery pack in which each cell is explicitly modelled by the Single Particle Model with electrolyte and thermal dynamics. A nonlinear Model Predictive Control (MPC) scheme is presented for optimally charging the battery pack while taking voltage and temperature limits on each cell into account. Since the computational cost of nonlinear MPC grows significantly with the complexity of the underlying model, a sensitivity-based MPC (sMPC) is proposed, in which the model adopted is obtained by linearizing the dynamics along a nominal trajectory that is updated over time. The resulting sMPC optimizations are quadratic programs which can be solved in real time even for large battery packs (e.g. a fully electric motorbike with 156 cells) while achieving the same performance as the nonlinear MPC.
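The core of the sMPC idea is to obtain time-varying linear models by differentiating the nonlinear dynamics along a nominal trajectory. A minimal sketch with a toy one-state system (the dynamics `f` below are a hypothetical placeholder, not the Single Particle Model):

```python
import numpy as np

def f(x, u):
    """Toy discrete-time dynamics, standing in for the cell model."""
    return x + 0.1 * (-x**2 + u)

def linearize_along(x_nom, u_nom, eps=1e-6):
    """Central finite differences give the time-varying Jacobians
    A_t = df/dx and B_t = df/du along the nominal trajectory; these
    define the quadratic program solved at each sMPC step."""
    A, B = [], []
    for x, u in zip(x_nom, u_nom):
        A.append((f(x + eps, u) - f(x - eps, u)) / (2 * eps))
        B.append((f(x, u + eps) - f(x, u - eps)) / (2 * eps))
    return A, B

A, B = linearize_along([1.0, 0.9], [0.5, 0.5])
```

In the paper's setting the nominal trajectory is refreshed over time, so the linearization stays close to the actual pack behaviour while each optimization remains a quadratic program.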
We study operations of a battery energy storage system under a baseline-based demand response (DR) program with an uncertain schedule of DR events. Baseline-based DR programs may provide undesired incentives to inflate baseline consumption on non-event days in order to increase apparent DR reduction on event days and secure higher DR payments. Our goal is to identify and quantify such incentives. To understand customer decisions, we formulate the problem of determining hourly battery charging and discharge schedules to minimize expected net costs, defined as energy purchase costs minus energy export rebates and DR payments, over a sufficiently long time horizon (e.g., a year). The complexity of this stochastic optimization problem grows exponentially with the time horizon considered. To obtain computationally tractable solutions, we propose using multistage model predictive control with scenario sampling. Numerical results indicate that our solutions are near optimal (e.g., within 3% of the optimum in the test cases). Finally, we apply our solutions to study an example residential customer with solar photovoltaic and battery systems participating in a typical existing baseline-based DR program. Results reveal that over 66% of the average apparent load reduction during DR events could result from inflation of baseline consumption during non-event days.
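The scenario-sampling idea can be sketched in a few lines: the uncertain DR-event schedule is replaced by a handful of sampled event patterns, and the schedule minimizing the average net cost over those samples is chosen. All prices, payment levels, the 3-hour horizon, and the energy-neutrality constraint below are illustrative assumptions, not values from the paper:

```python
import itertools
import random

random.seed(0)
horizon = 3
price = [0.10, 0.30, 0.20]        # hourly energy price, illustrative
dr_payment = 0.50                 # payment per unit of apparent load reduction
# 20 sampled DR-event schedules (each hour is an event with probability 0.3)
scenarios = [[random.random() < 0.3 for _ in range(horizon)]
             for _ in range(20)]

def net_cost(schedule, events):
    """Energy purchase cost minus export rebates and DR payments.
    u > 0 means charging (buying), u < 0 means discharging (exporting)."""
    cost = 0.0
    for t, u in enumerate(schedule):
        cost += price[t] * u                 # buy when charging, rebate when exporting
        if events[t] and u < 0:
            cost -= dr_payment * (-u)        # DR payment for reduction during events
    return cost

# energy-neutral candidate schedules; pick the one with lowest sampled average cost
candidates = [s for s in itertools.product([-1, 0, 1], repeat=horizon)
              if sum(s) == 0]
best = min(candidates,
           key=lambda s: sum(net_cost(s, ev) for ev in scenarios) / len(scenarios))
```

The real problem enumerates nothing, of course; multistage MPC re-solves a sampled optimization at each stage, but the averaging over sampled scenarios is the same mechanism shown here.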
In this paper we present a Learning Model Predictive Control (LMPC) strategy for linear and nonlinear time optimal control problems. Our work builds on existing LMPC methodologies and guarantees finite time convergence properties for the closed-loop system. We show how to construct a time varying safe set and terminal cost function using closed-loop data. The resulting LMPC policy is time varying and guarantees recursive constraint satisfaction and non-decreasing performance. Computational efficiency is obtained by convexifying the safe set and terminal cost function. We demonstrate that, for a class of nonlinear systems and convex constraints, the convex LMPC formulation guarantees recursive constraint satisfaction and non-decreasing performance. Finally, we illustrate the effectiveness of the proposed strategies on minimum time obstacle avoidance and racing examples.
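The data bookkeeping behind an LMPC iteration is simple: each closed-loop run contributes its visited states to the safe set, and the realized cost-to-go at those states becomes terminal-cost data. A minimal one-dimensional sketch (toy states and stage costs; the nearest-point lookup below is a crude stand-in for the convexified terminal cost in the paper):

```python
import numpy as np

safe_set, cost_to_go = [], []

def record_iteration(states, stage_costs):
    """After a closed-loop run, store each visited state together with its
    realized cost-to-go (backward cumulative sum of stage costs)."""
    J = np.cumsum(stage_costs[::-1])[::-1]   # J_t = sum of stage costs from t on
    safe_set.extend(states)
    cost_to_go.extend(J.tolist())

def terminal_cost(x):
    """Terminal cost queried by the next LMPC iteration. The paper takes a
    lower envelope over convex combinations of stored states; here it is
    approximated by the cost of the closest stored state."""
    d = [abs(s - x) for s in safe_set]
    return cost_to_go[int(np.argmin(d))]

record_iteration([2.0, 1.0, 0.0], [4.0, 1.0, 0.0])
```

Because each new run can only add points with equal or lower realized cost, terminal costs never increase across iterations, which is the mechanism behind the non-decreasing performance guarantee.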
Efficiently computing an optimal control policy in the presence of stochastic future disturbances has always been a challenge. The predicted stochastic future disturbance can be represented by a scenario tree, but solving the optimal control problem with a scenario tree is usually computationally demanding. In this paper, we propose a data-based clustering approximation method for the scenario tree representation. Unlike the popular Markov chain approximation, the proposed method can retain information from previous steps while keeping the state space small. The predictive optimal control problem can then be approximately solved with reduced computational load using dynamic programming. The proposed method is evaluated in numerical examples and compared with a method that models the disturbance as a non-stationary Markov chain. The results show that the proposed method achieves better control performance than the Markov chain method.
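The clustering step can be illustrated in one dimension: disturbance samples at a given stage are grouped into a small number of clusters, and the centroids with their empirical probabilities replace the full scenario fan. The bimodal sample data and plain k-means below are illustrative only; the paper's method additionally carries information across stages:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy disturbance samples drawn from a two-mode distribution
samples = np.concatenate([rng.normal(-2, 0.3, 50), rng.normal(2, 0.3, 50)])

def kmeans_1d(data, k, iters=20):
    """Plain 1-D k-means: alternate nearest-centroid assignment and
    centroid update, then read off empirical cluster probabilities."""
    centers = np.linspace(data.min(), data.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        centers = np.array([data[labels == j].mean() for j in range(k)])
    probs = np.bincount(labels, minlength=k) / len(data)
    return centers, probs

centers, probs = kmeans_1d(samples, 2)   # two representative disturbance values
```

Replacing many sampled branches by a few weighted centroids is what keeps the dynamic-programming state space small enough to solve.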
This paper proposes an off-line algorithm, called Recurrent Model Predictive Control (RMPC), to solve general nonlinear finite-horizon optimal control problems. Unlike traditional Model Predictive Control (MPC) algorithms, it can make full use of the current computing resources and adaptively select the longest model prediction horizon. Our algorithm employs a recurrent function to approximate the optimal policy, mapping the system states and reference values directly to the control inputs. The number of prediction steps equals the number of recurrent cycles of the learned policy function. Starting from an arbitrary initial policy function, the proposed RMPC algorithm can converge to the optimal policy by directly minimizing the designed loss function. We further prove the convergence and optimality of the RMPC algorithm through the Bellman optimality principle, and demonstrate its generality and efficiency using two numerical examples.
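The recurrent-policy evaluation can be sketched as a single cell applied repeatedly, with the number of cycles playing the role of the prediction horizon: more cycles, longer effective lookahead. The cell weights below are hand-picked placeholders, not a trained policy:

```python
import numpy as np

def recurrent_policy(x, x_ref, cycles):
    """Evaluate a (hypothetical) recurrent policy: one cell is unrolled
    `cycles` times, each cycle corresponding to one prediction step, and
    the control input is read out after the last cycle."""
    h = np.zeros(2)                                   # recurrent hidden state
    W_h = 0.5 * np.eye(2)                             # hidden-to-hidden weights
    W_in = np.array([1.0, -1.0])                      # tracking-error input weights
    w_out = np.array([0.4, 0.1])                      # readout weights
    for _ in range(cycles):
        h = np.tanh(W_h @ h + W_in * (x_ref - x))
        u = w_out @ h
    return u

u_short = recurrent_policy(0.0, 1.0, cycles=1)   # short effective horizon
u_long = recurrent_policy(0.0, 1.0, cycles=10)   # longer effective horizon
```

At run time the controller can simply stop unrolling when its compute budget is exhausted, which is how the abstract's "adaptively select the longest prediction horizon" property is realized.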