We propose an approach for the synthesis of robust and optimal feedback controllers for nonlinear PDEs. Our approach approximates the infinite-dimensional control system by a pseudospectral collocation method, leading to high-dimensional nonlinear dynamics. For the reduced-order model, we construct a robust feedback control based on the $\mathcal{H}_{\infty}$ control method, which requires the solution of an associated high-dimensional Hamilton-Jacobi-Isaacs nonlinear PDE. The dimensionality of the Isaacs PDE is tackled by means of a separable representation of the control system and a polynomial approximation ansatz for the corresponding value function. Our method proves effective for the robust stabilization of nonlinear dynamics up to dimension $d \approx 12$. We assess the robustness and optimality of our design over a class of nonlinear parabolic PDEs, including nonlinear advection and reaction terms. The proposed design yields a feedback controller achieving optimal stabilization and disturbance rejection, while providing a modelling framework for the robust control of PDEs under parametric uncertainties.
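As background for the pseudospectral collocation step described above, the following is a minimal sketch (not the paper's implementation) of how a semilinear parabolic PDE such as $u_t = u_{xx} + u^3$ with homogeneous Dirichlet boundary conditions can be reduced to a finite-dimensional nonlinear control system via a Chebyshev differentiation matrix; the specific PDE, node count, and initial condition are illustrative assumptions.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and Gauss-Lobatto nodes on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(N + 1) / N)            # collocation nodes
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))     # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                          # diagonal via row sums
    return D, x

# Semi-discretize u_t = u_xx + u^3, u(-1) = u(1) = 0:
N = 12
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]        # second derivative restricted to interior nodes

def rhs(u):
    """Reduced-order dynamics on the interior nodes: du/dt = D2 u + u^3."""
    return D2 @ u + u ** 3

u0 = 0.1 * np.sin(np.pi * x[1:-1])   # interior initial state, dimension N - 1
```

The interior state has dimension $N-1 = 11$, matching the regime $d \approx 12$ in which the abstract reports effective stabilization.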
A procedure for the numerical approximation of high-dimensional Hamilton-Jacobi-Bellman (HJB) equations associated with optimal feedback control problems for semilinear parabolic equations is proposed. Its main ingredients are a pseudospectral collocat
A tensor decomposition approach for the solution of high-dimensional, fully nonlinear Hamilton-Jacobi-Bellman equations arising in optimal feedback control of nonlinear dynamics is presented. The method combines a tensor train approximation for the v
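The truncated abstract above refers to a tensor train (TT) approximation of the value function. As background, here is a minimal TT-SVD sketch via sequential truncated SVDs; the function name, rank cap, and test tensor are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tt_svd(tensor, rmax=8, tol=1e-10):
    """Tensor-train decomposition of a dense array via sequential SVDs."""
    shape = tensor.shape
    cores, r_prev = [], 1
    C = tensor
    for n in shape[:-1]:
        C = C.reshape(r_prev * n, -1)
        U, s, Vt = np.linalg.svd(C, full_matrices=False)
        r = max(1, min(rmax, int(np.sum(s > tol))))      # truncated TT rank
        cores.append(U[:, :r].reshape(r_prev, n, r))
        C = s[:r, None] * Vt[:r]                         # carry remainder forward
        r_prev = r
    cores.append(C.reshape(r_prev, shape[-1], 1))
    return cores

# A separable (rank-1) tensor is reconstructed exactly with TT ranks equal to 1:
x = np.linspace(0.0, 1.0, 5)
T = np.einsum('i,j,k->ijk', x, x ** 2, np.sin(x))
cores = tt_svd(T)
approx = np.einsum('aib,bjc,ckd->ijk', *cores)
print(np.allclose(approx, T))  # True
```

The storage cost of the TT format scales linearly in the number of dimensions, which is what makes it attractive for the high-dimensional HJB setting the abstract describes.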
Computing optimal feedback controls for nonlinear systems generally requires solving Hamilton-Jacobi-Bellman (HJB) equations, which are notoriously difficult when the state dimension is large. Existing strategies for high-dimensional problems often r
The approximation of solutions to second-order Hamilton--Jacobi--Bellman (HJB) equations by deep neural networks is investigated. It is shown that for HJB equations that arise in the context of the optimal control of certain Markov processes the solu
Policy iteration is a widely used technique to solve the Hamilton-Jacobi-Bellman (HJB) equation, which arises from nonlinear optimal feedback control theory. Its convergence analysis has attracted much attention in the unconstrained case. Here we ana
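In the unconstrained linear-quadratic case, policy iteration for the HJB equation reduces to Kleinman's algorithm: policy evaluation is a Lyapunov equation and policy improvement updates the feedback gain. A minimal sketch, with the double-integrator system chosen purely for illustration (it is not an example from the abstract):

```python
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

# Double integrator with quadratic cost (illustrative choice):
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.eye(1)

K = np.array([[1.0, 1.0]])          # initial stabilizing gain
for _ in range(10):
    Acl = A - B @ K
    # Policy evaluation: cost of the current gain solves
    # Acl^T P + P Acl + Q + K^T R K = 0
    P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
    # Policy improvement: K <- R^{-1} B^T P
    K = np.linalg.solve(R, B.T @ P)

# The iterates converge to the stabilizing Riccati solution:
print(np.allclose(P, solve_continuous_are(A, B, Q, R)))  # True
```

Convergence is quadratic provided the initial gain is stabilizing; the constrained setting that the abstract begins to discuss requires a more careful analysis.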