
Linear-quadratic control of Volterra integral systems and extensions

Posted by: S. A. Belbas
Publication date: 2021
Language: English





We study linear-quadratic optimal control problems for Volterra systems, as well as problems that are linear-quadratic in the control but generally nonlinear in the state. In the case of linear-quadratic Volterra control, we obtain sharp necessary and sufficient conditions for optimality. For problems that are linear-quadratic in the control only, we obtain a novel form of necessary conditions in the form of a double Volterra equation, and we prove the solvability of such equations.
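The abstract does not reproduce the problem data; as a minimal sketch (the kernel $K$, coefficients $A, B$, and weights $Q, R$ below are assumptions, not the paper's notation), a linear-quadratic Volterra control problem of this type can be written as

$$ x(t) = \varphi(t) + \int_0^t K(t,s)\bigl[A(s)\,x(s) + B(s)\,u(s)\bigr]\,ds, \qquad t \in [0,T], $$

with the quadratic cost

$$ J(u) = \int_0^T \bigl[x(t)^{\top} Q(t)\,x(t) + u(t)^{\top} R(t)\,u(t)\bigr]\,dt $$

to be minimized over admissible controls $u$; the "linear-quadratic in the control only" case replaces the linear state dynamics by a state equation that is nonlinear in $x$ but affine in $u$.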


Read also

This paper is concerned with linear-quadratic optimal control for a class of singular Volterra integral equations. Under proper convexity conditions, an optimal control uniquely exists, and it can be characterized via the Fréchet derivative of the quadratic functional in a Hilbert space or via maximum-principle-type necessary conditions. However, these (equivalent) characterizations have a shortcoming: the current value of the optimal control depends on future values of the optimal state, which is not practically feasible. The main purpose of this paper is to obtain a causal state feedback representation of the optimal control.
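The paper's precise class of kernels is not quoted here; as an illustrative instance (the singularity exponent $\beta \in (0,1)$ and the coefficients $A, B, Q, R$ are assumptions), a singular Volterra LQ problem has the form

$$ X(t) = \varphi(t) + \int_0^t \frac{A(t,s)\,X(s) + B(t,s)\,u(s)}{(t-s)^{1-\beta}}\,ds, \qquad J(u) = \int_0^T \bigl[\langle Q\,X(t), X(t)\rangle + \langle R\,u(t), u(t)\rangle\bigr]\,dt, $$

and a causal representation means a feedback law of the form $u(t) = \Theta(t)\,X(t) + \theta(t)$ whose value at time $t$ uses only information available up to time $t$.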
We provide an exhaustive treatment of Linear-Quadratic control problems for a class of stochastic Volterra equations of convolution type, whose kernels are Laplace transforms of certain signed matrix measures which are not necessarily finite. These equations are in general neither Markovian nor semimartingales, and include the fractional Brownian motion with Hurst index smaller than $1/2$ as a special case. We establish the correspondence of the initial problem with a possibly infinite-dimensional Markovian one in a Banach space, which allows us to identify the Markovian controlled state variables. Using a refined martingale verification argument combined with a squares completion technique, we prove that the value function is of linear-quadratic form in these state variables with a linear optimal feedback control, depending on non-standard Banach-space-valued Riccati equations. Furthermore, we show that the value function of the stochastic Volterra optimization problem can be approximated by that of conventional finite-dimensional Markovian Linear-Quadratic problems, which is of crucial importance for numerical implementation.
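A schematic version of the controlled dynamics described above (the coefficient names $B, C, D, F$ and the input curve $g_0$ are illustrative placeholders) is

$$ X_t = g_0(t) + \int_0^t K(t-s)\bigl(B X_s + C u_s\bigr)\,ds + \int_0^t K(t-s)\bigl(D X_s + F u_s\bigr)\,dW_s, \qquad K(t) = \int_{\mathbb{R}_+} e^{-\theta t}\,\mu(d\theta), $$

where $\mu$ is the (possibly non-finite) signed matrix measure. The fractional kernel $K(t) = t^{H-1/2}/\Gamma(H+1/2)$ with $H < 1/2$ is of this form, which is how the fractional Brownian motion case with small Hurst index falls into this class.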
We consider the linear quadratic Gaussian control problem with a discounted cost functional for descriptor systems on the infinite time horizon. Based on recent results from the deterministic framework, we characterize the feasibility of this problem using a linear matrix inequality. In particular, conditions for existence and uniqueness of optimal controls are derived, which are weaker than those required by the standard approaches in the literature. We further show that, also for the stochastic problem, the optimal control is given in terms of the stabilizing solution of the Lur'e equation, which generalizes the algebraic Riccati equation.
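As a hedged sketch of this setting (the matrices below are placeholders, not the paper's notation), a stochastic descriptor system with discount rate $\alpha > 0$ reads

$$ E\,dx(t) = \bigl(A\,x(t) + B\,u(t)\bigr)\,dt + G\,dW(t), \qquad J(u) = \mathbb{E}\int_0^\infty e^{-\alpha t}\bigl[x(t)^{\top} Q\,x(t) + u(t)^{\top} R\,u(t)\bigr]\,dt, $$

with $E$ possibly singular; in the standard non-descriptor case with definite weights, the Lur'e equation mentioned above reduces to the familiar algebraic Riccati equation.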
We establish existence and uniqueness for infinite-dimensional Riccati equations taking values in the Banach space $L^1(\mu \otimes \mu)$ for certain signed matrix measures $\mu$ which are not necessarily finite. Such equations can be seen as the infinite-dimensional analogue of matrix Riccati equations, and they appear in the Linear-Quadratic control theory of stochastic Volterra equations.
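For orientation only (this is the classical finite-dimensional object, not the paper's Banach-space equation), the matrix Riccati equation of LQ control to which the $L^1(\mu \otimes \mu)$-valued equations are the analogue is

$$ \dot P(t) + A^{\top} P(t) + P(t) A + Q - P(t) B R^{-1} B^{\top} P(t) = 0, \qquad P(T) = P_T . $$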
This paper studies a class of partially observed Linear Quadratic Gaussian (LQG) problems with unknown dynamics. We establish an end-to-end sample complexity bound on learning a robust LQG controller for open-loop stable plants. This is achieved using a robust synthesis procedure, where we first estimate a model from a single input-output trajectory of finite length, identify an H-infinity bound on the estimation error, and then design a robust controller using the estimated model and its quantified uncertainty. Our synthesis procedure leverages a recent control tool called Input-Output Parameterization (IOP) that enables robust controller design using convex optimization. For open-loop stable systems, we prove that the LQG performance degrades linearly with respect to the model estimation error using the proposed synthesis procedure. Despite the hidden states in the LQG problem, the achieved scaling matches previous results on learning Linear Quadratic Regulator (LQR) controllers with full state observations.
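Stated schematically (the constant $C$ and the exact norms are placeholders suppressing problem-dependent quantities), the linear-degradation guarantee described above takes the form

$$ J(\hat K) - J(K_\star) \le C\,\varepsilon, \qquad \varepsilon \ge \|\hat G - G\|_{\mathcal H_\infty}, $$

for sufficiently accurate estimates, where $G$ is the true open-loop stable plant, $\hat G$ the model identified from a single input-output trajectory, $\hat K$ the controller produced by the robust IOP synthesis, and $K_\star$ the optimal LQG controller.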
