
Weak Closed-Loop Solvability of Stochastic Linear Quadratic Optimal Control Problems of Markovian Regime Switching System

Posted by: Jiaqiang Wen
Publication date: 2019
Language: English





In this paper, we investigate the open-loop and weak closed-loop solvabilities of the stochastic linear quadratic (LQ, for short) optimal control problem for Markovian regime switching systems. Interestingly, these two notions of solvability turn out to be equivalent. We first provide an alternative characterization of the open-loop solvability of the LQ problem using a perturbation approach. Then, we study the weak closed-loop solvability of the LQ problem for Markovian regime switching systems and establish the equivalence between open-loop and weak closed-loop solvability. Finally, we present an example illustrating the procedure for finding weak closed-loop optimal strategies within the framework of Markovian regime switching systems.
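For orientation, the problem class can be sketched as follows (a standard formulation; the paper's precise coefficient and admissibility assumptions may differ in detail). Given a Markov chain $\alpha(\cdot)$ and a Brownian motion $W(\cdot)$, the controlled state obeys

\[ dX(s) = \big[A(s,\alpha(s))X(s) + B(s,\alpha(s))u(s)\big]\,ds + \big[C(s,\alpha(s))X(s) + D(s,\alpha(s))u(s)\big]\,dW(s), \qquad X(t) = x, \]

and the cost is the quadratic functional

\[ J(t,x;u(\cdot)) = \mathbb{E}\bigg[\langle G(\alpha(T))X(T),X(T)\rangle + \int_t^T \Big(\langle Q(s,\alpha(s))X(s),X(s)\rangle + \langle R(s,\alpha(s))u(s),u(s)\rangle\Big)\,ds\bigg]. \]

The problem is open-loop solvable at $(t,x)$ if some admissible control attains $\inf_{u(\cdot)} J(t,x;u(\cdot))$, and weak closed-loop solvable if an optimal control of feedback form $u(s) = \Theta(s,\alpha(s))X(s) + v(s)$ exists, where $\Theta$ is only required to be locally square-integrable on $[t,T)$ and may blow up as $s \to T$. The paper's main result is that the two notions coincide.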




Read also

Jingrui Sun, Zhen Wu, Jie Xiong (2021)
This paper is concerned with a backward stochastic linear-quadratic (LQ, for short) optimal control problem with deterministic coefficients. The weighting matrices are allowed to be indefinite, and cross-product terms in the control and state processes are present in the cost functional. Based on a Hilbert space method, necessary and sufficient conditions are derived for the solvability of the problem, and a general approach for constructing optimal controls is developed. The crucial step in this construction is to establish the solvability of a Riccati-type equation, which is accomplished under a fairly weak condition by investigating the connection with forward stochastic LQ optimal control problems.
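Schematically (simplifying the paper's notation), the state here is governed by a backward stochastic differential equation with prescribed terminal value,

\[ dY(s) = \big[A(s)Y(s) + B(s)u(s) + C(s)Z(s)\big]\,ds + Z(s)\,dW(s), \qquad Y(T) = \xi, \]

and the cost is a quadratic form in $(Y, Z, u)$ that may contain cross terms, e.g.

\[ J(\xi;u(\cdot)) = \mathbb{E}\bigg[\langle G\,Y(0),Y(0)\rangle + \int_0^T \Big(\langle Q(s)Y(s),Y(s)\rangle + 2\langle S(s)Y(s),u(s)\rangle + \langle R(s)u(s),u(s)\rangle\Big)\,ds\bigg], \]

with none of $G$, $Q$, $R$ required to be positive semidefinite.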
We provide an exhaustive treatment of Linear-Quadratic control problems for a class of stochastic Volterra equations of convolution type, whose kernels are Laplace transforms of certain signed matrix measures which are not necessarily finite. These equations are in general neither Markovian nor semimartingales, and include the fractional Brownian motion with Hurst index smaller than $1/2$ as a special case. We establish the correspondence of the initial problem with a possibly infinite-dimensional Markovian one in a Banach space, which allows us to identify the Markovian controlled state variables. Using a refined martingale verification argument combined with a squares completion technique, we prove that the value function is of linear-quadratic form in these state variables with a linear optimal feedback control, depending on non-standard Banach-space-valued Riccati equations. Furthermore, we show that the value function of the stochastic Volterra optimization problem can be approximated by that of conventional finite-dimensional Markovian Linear-Quadratic problems, which is of crucial importance for numerical implementation.
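For concreteness, the standard example of such a kernel is the fractional one: for Hurst index $H \in (0, 1/2)$,

\[ K(t) = \frac{t^{H-1/2}}{\Gamma(H+1/2)} = \int_0^\infty e^{-\theta t}\,\mu(\mathrm{d}\theta), \qquad \mu(\mathrm{d}\theta) = \frac{\theta^{-H-1/2}}{\Gamma(H+1/2)\,\Gamma(1/2-H)}\,\mathrm{d}\theta, \]

where the measure $\mu$ has infinite total mass. The Markovian lift replaces the Volterra state by a family $(Y_t(\theta))_{\theta > 0}$ indexed by the support of $\mu$, which is what makes the reformulated problem infinite dimensional.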
Juan Li, Shanjian Tang (2012)
In this paper we study the optimal stochastic control problem for stochastic differential systems reflected in a domain. The cost functional is a recursive one, defined via the generalized backward stochastic differential equations developed by Pardoux and Zhang [20]. The value function is shown to be the unique viscosity solution to the associated Hamilton-Jacobi-Bellman equation, which is a fully nonlinear parabolic partial differential equation with a nonlinear Neumann boundary condition. To this end, we also prove some new estimates for stochastic differential systems reflected in a domain.
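In schematic form (the exact Hamiltonian depends on the paper's coefficients), the PDE characterization reads

\[ \begin{cases} \partial_t V + H\big(t,x,V,\nabla V, D^2 V\big) = 0 & \text{in } [0,T)\times D,\\[2pt] \dfrac{\partial V}{\partial n}(t,x) + g\big(t,x,V(t,x)\big) = 0 & \text{on } [0,T)\times \partial D,\\[2pt] V(T,\cdot) = \Phi & \text{on } \bar{D}, \end{cases} \]

where $H$ is the fully nonlinear Hamiltonian obtained by optimizing, over the control set, the controlled generator plus the driver of the recursive cost, and the nonlinear Neumann condition on $\partial D$ comes from the boundary term of the generalized BSDE.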
This paper presents a state and state-input constrained variant of the discrete-time iterative Linear Quadratic Regulator (iLQR) algorithm, with linear time complexity in the number of time steps. The approach is based on a projection of the control input onto the nullspace of the linearized constraints. We derive a fully constraint-compliant feedforward-feedback control update rule, which we can solve for efficiently with Riccati-style difference equations. We assume that the relative degree of all constraints in the discrete-time system model is equal to one, which often holds for robotics problems employing rigid-body dynamic models. Simulation examples, including a 6 DoF robotic arm, are given to validate and illustrate the performance of the method.
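A minimal sketch of the projection step (not the paper's implementation, and the function name is illustrative), assuming an unconstrained iLQR update $\delta u = k + K\,\delta x$ and a single-step linearized constraint $c_0 + C_x\,\delta x + C_u\,\delta u = 0$ with $C_u$ of full row rank (relative degree one): the update is split into a particular solution that enforces the constraint and a component in the nullspace of $C_u$ that keeps the cost-reducing directions.

```python
# Sketch: projecting an iLQR control update onto the nullspace of a
# linearized state-input constraint c0 + C_x dx + C_u du = 0.
import numpy as np

def constrained_update(k, K, C_x, C_u, c0):
    """Modify du = k + K dx so the linearized constraint holds for every dx.

    k   : (m,)   unconstrained feedforward term
    K   : (m, n) unconstrained feedback gain
    C_x : (p, n) constraint Jacobian w.r.t. state
    C_u : (p, m) constraint Jacobian w.r.t. input (full row rank, p <= m)
    c0  : (p,)   constraint value on the current nominal trajectory
    """
    Cu_pinv = np.linalg.pinv(C_u)   # minimum-norm particular solution
    # Orthonormal basis of the nullspace of C_u via SVD.
    _, s, Vt = np.linalg.svd(C_u)
    rank = int(np.sum(s > 1e-10))
    N = Vt[rank:].T                 # (m, m - rank)
    # Particular terms enforce the constraint; nullspace projections
    # retain whatever part of the optimal update does not violate it.
    k_c = -Cu_pinv @ c0 + N @ (N.T @ k)
    K_c = -Cu_pinv @ C_x + N @ (N.T @ K)
    return k_c, K_c

# Toy check: one constraint on a 2-state, 3-input system.
rng = np.random.default_rng(0)
k, K = rng.standard_normal(3), rng.standard_normal((3, 2))
C_x, C_u = rng.standard_normal((1, 2)), rng.standard_normal((1, 3))
c0 = rng.standard_normal(1)
k_c, K_c = constrained_update(k, K, C_x, C_u, c0)
dx = rng.standard_normal(2)
print(np.allclose(c0 + C_x @ dx + C_u @ (k_c + K_c @ dx), 0.0))  # True
```

Since $C_u N = 0$ and $C_u C_u^{+} = I$ under the full-row-rank assumption, the constraint residual cancels identically in $\delta x$, which is why a single projected feedforward-feedback pair suffices at each time step.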
Na Li, Xun Li, Jing Peng (2020)
This paper applies a reinforcement learning (RL) method to solve infinite-horizon continuous-time stochastic linear quadratic problems, where the drift and diffusion terms in the dynamics may depend on both the state and the control. Based on Bellman's dynamic programming principle, an online RL algorithm is presented that attains the optimal control with only partial system information. The algorithm directly computes the optimal control rather than estimating the system coefficients and solving the related Riccati equation, and it requires only local trajectory information, which greatly simplifies the computation. Two numerical examples are carried out to shed light on our theoretical findings.
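For context, the model-based route that such an algorithm bypasses: in a standard formulation with dynamics $dX = (AX + Bu)\,dt + (CX + Du)\,dW$ and cost $\mathbb{E}\int_0^\infty \big(\langle QX,X\rangle + \langle Ru,u\rangle\big)\,dt$, the optimal feedback is $u^* = -(R + D^\top P D)^{-1}(B^\top P + D^\top P C)X$, where $P$ solves the stochastic algebraic Riccati equation

\[ A^\top P + PA + C^\top P C + Q - (PB + C^\top P D)(R + D^\top P D)^{-1}(B^\top P + D^\top P C) = 0. \]

Solving this equation requires knowing the coefficients $(A, B, C, D)$; the RL approach instead learns the feedback gain directly from observed trajectories.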