
On the Turnpike Property and the Receding-Horizon Method for Linear-Quadratic Optimal Control Problems

Posted by Laurent Pfeiffer
Publication date: 2018
Paper language: English





Optimal control problems with a very large time horizon can be tackled with the Receding Horizon Control (RHC) method, which consists in solving a sequence of optimal control problems with a small prediction horizon. The main result of this article is a proof of the exponential convergence (with respect to the prediction horizon) of the control generated by the RHC method towards the exact solution of the problem. The result is established for a class of infinite-dimensional linear-quadratic optimal control problems with time-independent dynamics and integral cost. Such problems satisfy the turnpike property: the optimal trajectory remains, for most of the time, very close to the solution of the associated static optimization problem. Specific terminal cost functions, derived from the Lagrange multiplier associated with the static optimization problem, are employed in the implementation of the RHC method.
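As a rough illustration of the receding-horizon idea described above, here is a minimal finite-dimensional sketch in Python/NumPy (not the paper's infinite-dimensional setting): at each time step a short-horizon discrete-time LQ problem is solved by a backward Riccati recursion, only the first control of the plan is applied, and the horizon slides forward. The matrices A, B, the weights Q, R, the horizon lengths, and the placeholder terminal cost are all illustrative assumptions, not taken from the paper.

import numpy as np

# Illustrative data (assumed, not from the paper): a two-state discrete-time system.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # running state-cost weight
R = np.array([[0.1]])  # running control-cost weight

def first_step_lq_gain(A, B, Q, R, N, P_terminal):
    # Backward Riccati recursion over a prediction horizon of length N;
    # returns the feedback gain used for the first step of the plan.
    P = P_terminal.copy()
    K = np.zeros((B.shape[1], A.shape[0]))
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

def receding_horizon_control(x0, T_sim=50, N_pred=10):
    # The paper builds the terminal cost from the Lagrange multiplier of the
    # static problem; here P_terminal = Q is used only as a placeholder.
    K = first_step_lq_gain(A, B, Q, R, N_pred, P_terminal=Q)
    x = x0.copy()
    trajectory = [x.copy()]
    for _ in range(T_sim):
        u = -K @ x            # apply only the first control of each short-horizon plan
        x = A @ x + B @ u
        trajectory.append(x.copy())
    return np.array(trajectory)

print(receding_horizon_control(np.array([1.0, 0.0]))[-1])

Because the data here are time-invariant, the first-step gain is identical at every iteration and is computed once; in general, RHC re-solves the short-horizon problem at each step.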




Read also

Jingrui Sun, Zhen Wu, Jie Xiong, 2021
This paper is concerned with a backward stochastic linear-quadratic (LQ, for short) optimal control problem with deterministic coefficients. The weighting matrices are allowed to be indefinite, and cross-product terms in the control and state processes are present in the cost functional. Based on a Hilbert space method, necessary and sufficient conditions are derived for the solvability of the problem, and a general approach for constructing optimal controls is developed. The crucial step in this construction is to establish the solvability of a Riccati-type equation, which is accomplished under a fairly weak condition by investigating the connection with forward stochastic LQ optimal control problems.
This paper is devoted to analysing the explicit slow decay rate and the turnpike property in infinite-horizon linear-quadratic optimal control problems for hyperbolic systems. Assuming that a weak observability or controllability condition is satisfied, lower and upper bounds on the corresponding algebraic Riccati operator are estimated. Based on these two bounds, the explicit slow decay rate of the closed-loop system with Riccati-based optimal feedback control is obtained. The averaged turnpike property for this problem is also discussed. We then apply these results to LQ optimal control problems constrained to networks of one-dimensional wave equations, and also to some multi-dimensional ones with local controls that lack the Geometric Control Condition (GCC).
Na Li, Xun Li, Jing Peng, 2020
This paper applies a reinforcement learning (RL) method to solve infinite-horizon continuous-time stochastic linear-quadratic problems, where the drift and diffusion terms in the dynamics may depend on both the state and the control. Based on Bellman's dynamic programming principle, an online RL algorithm is presented to attain the optimal control with only partial system information. This algorithm directly computes the optimal control rather than estimating the system coefficients and solving the related Riccati equation. It requires only local trajectory information, which greatly simplifies the computation. Two numerical examples are carried out to shed light on our theoretical findings.
The widespread adoption of nonlinear Receding Horizon Control (RHC) strategies by industry has led to more than 30 years of intense research efforts to provide stability guarantees for these methods. However, current theoretical guarantees require that each (generally nonconvex) planning problem can be solved to (approximate) global optimality, which is an unrealistic requirement for the derivative-based local optimization methods generally used in practical implementations of RHC. This paper takes the first step towards understanding stability guarantees for nonlinear RHC when the inner planning problem is solved to first-order stationary points, but not necessarily global optima. Special attention is given to feedback-linearizable systems, and a mixture of positive and negative results is provided. We establish that, under certain strong conditions, first-order solutions to RHC exponentially stabilize linearizable systems. Crucially, this guarantee requires that the state costs applied to the planning problems are, in a certain sense, 'compatible' with the global geometry of the system, and a simple counter-example demonstrates the necessity of this condition. These results highlight the need to rethink the role of global geometry in the context of optimization-based control.
The linear-quadratic regulator (LQR) is an efficient control method for linear and linearized systems. Typically, LQR is implemented in minimal coordinates (also called generalized or joint coordinates). However, other coordinates are possible, and recent research suggests that there may be numerical and control-theoretic advantages to using higher-dimensional non-minimal state parameterizations for dynamical systems. One such parameterization is maximal coordinates, in which each link in a multi-body system is parameterized by its full six degrees of freedom and joints between links are modeled with algebraic constraints. Such constraints can also represent closed kinematic loops or contact with the environment. This paper investigates the difference between minimal- and maximal-coordinate LQR control laws. A case study of applying LQR to a simple pendulum, together with simulations comparing the basins of attraction and tracking performance of minimal- and maximal-coordinate LQR controllers, suggests that maximal-coordinate LQR achieves greater robustness and improved tracking performance compared to minimal-coordinate LQR when applied to nonlinear systems.
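For reference, a minimal sketch (Python/SciPy) of the standard minimal-coordinate LQR construction mentioned above, applied to a pendulum linearized about its upright equilibrium; the pendulum constants and cost weights are illustrative assumptions and are not taken from the paper.

import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed model: pendulum linearized about the upright equilibrium,
# with state x = [angle error, angular velocity] and torque input u.
g, l, m = 9.81, 1.0, 1.0
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])           # linearized dynamics x_dot = A x + B u
B = np.array([[0.0],
              [1.0 / (m * l**2)]])
Q = np.diag([10.0, 1.0])               # state cost weight (assumed)
R = np.array([[0.1]])                  # control cost weight (assumed)

# Solve the continuous-time algebraic Riccati equation and form the LQR gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # K = R^{-1} B^T P
print("LQR gain K =", K)               # control law: u = -K x

A maximal-coordinate formulation would instead carry the full rigid-body state together with joint constraints, which is beyond the scope of this sketch.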