We present a novel model-order reduction (MOR) method for linear time-invariant systems that preserves passivity and is thus suited for structure-preserving MOR for port-Hamiltonian (pH) systems. Our algorithm exploits the well-known spectral factorization of the Popov function by a solution of the Kalman-Yakubovich-Popov (KYP) inequality. It performs MOR directly on the spectral factor, inheriting the original system's sparsity and thus enabling MOR in a large-scale context. Our analysis reveals that the spectral factorization corresponding to the minimal solution of an associated algebraic Riccati equation is preferable from a model reduction perspective and benefits pH-preserving MOR methods such as a modified version of the iterative rational Krylov algorithm (IRKA). Numerical examples demonstrate that our approach can produce high-fidelity reduced-order models close to (unstructured) $\mathcal{H}_2$-optimal reduced-order models.
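As a brief illustration of the factorization used above (a sketch in generic notation, not taken verbatim from the paper): for a passive system $(A,B,C,D)$ with Popov function $\Phi(s) = C(sI-A)^{-1}B + \big(C(-sI-A)^{-1}B\big)^T + D + D^T$, every symmetric solution $X \succeq 0$ of the KYP inequality

$$ \begin{bmatrix} -A^T X - X A & C^T - X B \\ C - B^T X & D + D^T \end{bmatrix} \succeq 0 $$

induces a factorization of the left-hand side as $\begin{bmatrix} C_W^T \\ D_W^T \end{bmatrix}\begin{bmatrix} C_W & D_W \end{bmatrix}$, yielding a spectral factor $W(s) = D_W + C_W (sI - A)^{-1} B$ with $\Phi(s) = W(-s)^T W(s)$. Since $W$ shares the state matrix $A$ with the original system, MOR applied to $W$ can exploit the original sparsity; the minimal solution of the associated Riccati equation selects one particular factor from this family.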
We study the convergence to equilibrium of an underdamped Langevin equation that is controlled by a linear feedback force. Specifically, we are interested in sampling the possibly multimodal invariant probability distribution of a Langevin system at small noise (or low temperature), for which the dynamics can easily get trapped inside metastable subsets of the phase space. We follow [Chen et al., J. Math. Phys. 56, 113302, 2015] and consider a Langevin equation that is simulated at a high temperature, with the control playing the role of a friction that balances the additional noise so as to restore the original invariant measure at a lower temperature. We discuss different limits as the temperature ratio goes to infinity and prove convergence to a limit dynamics. It turns out that, depending on whether the lower (target) or the higher (simulation) temperature is fixed, the controlled dynamics converges either to the overdamped Langevin equation or to a deterministic gradient flow. This implies that (a) the ergodic limit and the large temperature separation limit do not commute in general, and that (b) it is not possible to accelerate the speed of convergence to the ergodic limit by making the temperature separation larger and larger. We discuss the implications of these observations from the perspective of stochastic optimisation algorithms and enhanced sampling schemes in molecular dynamics.
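To fix ideas, a minimal sketch of such a controlled dynamics (our own notation; the precise scaling in Chen et al. may differ): with simulation temperature $T_{\mathrm{sim}}$ and target temperature $T_{\mathrm{target}} < T_{\mathrm{sim}}$, consider

$$ dq_t = p_t\,dt, \qquad dp_t = \big(-\nabla V(q_t) - \gamma p_t + u_t\big)\,dt + \sqrt{2\gamma T_{\mathrm{sim}}}\,dW_t, $$

with the linear feedback $u_t = -\gamma\big(T_{\mathrm{sim}}/T_{\mathrm{target}} - 1\big)\,p_t$. The total friction then equals $\gamma\,T_{\mathrm{sim}}/T_{\mathrm{target}}$, which balances the enhanced noise so that the invariant measure is again proportional to $\exp\!\big(-H(q,p)/T_{\mathrm{target}}\big)$. The limits discussed above correspond to sending the ratio $T_{\mathrm{sim}}/T_{\mathrm{target}}$ to infinity with either temperature held fixed.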
We study linear quadratic Gaussian (LQG) control design for linear port-Hamiltonian systems. To this end, we exploit the freedom in choosing the weighting matrices and propose a specific choice which leads to an LQG controller that is port-Hamiltonian and thus, in particular, stable and passive. Furthermore, we construct a reduced-order controller via balancing and subsequent truncation. This approach is closely related to classical LQG balanced truncation and shares a similar a priori error bound with respect to the gap metric. By exploiting the non-uniqueness of the Hamiltonian, we are able to determine an optimal pH representation of the full-order system in the sense that the error bound is minimized. In addition, we discuss consequences for pH-preserving balanced truncation model reduction, which results in two different classical $\mathcal{H}_\infty$ error bounds. Finally, we illustrate the theoretical findings by means of two numerical examples.
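For context, a sketch of the classical LQG ingredients this approach builds on (generic unit weights shown here, not the specific pH-compatible weightings proposed in the paper): with the stabilizing solutions $P$ and $Q$ of the control and filter Riccati equations

$$ A^T P + P A - P B B^T P + C^T C = 0, \qquad A Q + Q A^T - Q C^T C Q + B B^T = 0, $$

LQG balanced truncation transforms the state so that $P = Q = \mathrm{diag}(\sigma_1, \dots, \sigma_n)$ and truncates the states associated with small characteristic values $\sigma_i$; the classical a priori gap-metric error bound is expressed in terms of the discarded $\sigma_i$, which is why minimizing it over equivalent pH representations is meaningful.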
Nonlinear observers based on the well-known concept of minimum energy estimation are discussed. The approach relies on an output injection operator that is determined by a Hamilton-Jacobi-Bellman equation and subsequently approximated by a neural network. A suitable optimization problem that allows learning the network parameters is proposed and numerically investigated for linear and nonlinear oscillators.
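For orientation, the classical minimum energy (Mortensen) estimator has the form sketched below (standard setting and notation assumed; the paper's precise formulation may differ): for dynamics $\dot{x} = f(x)$ driven by process noise and measurements $y = h(x)$ corrupted by output noise, the estimate evolves as

$$ \dot{\hat{x}}(t) = f\big(\hat{x}(t)\big) + \big[\partial_x^2 \mathcal{V}\big(\hat{x}(t), t\big)\big]^{-1} h'\big(\hat{x}(t)\big)^T \big(y(t) - h(\hat{x}(t))\big), $$

where the value function $\mathcal{V}$ solves a Hamilton-Jacobi-Bellman equation. It is this HJB-based output injection that is approximated by a neural network, with the network parameters learned from the proposed optimization problem.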
Differential algebraic Riccati equations are at the heart of many applications in control theory. They are time-dependent, matrix-valued, and in particular nonlinear equations that require special methods for their solution. Low-rank methods have been used heavily, computing a low-rank solution at every step of a time discretization. We propose an all-at-once space-time solution, leading to a large nonlinear space-time problem, for which we propose a Newton-Kleinman iteration. Approximating the space-time problem in low-rank form requires fewer applications of the discretized differential operator and gives a low-rank approximation to the overall solution.
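As a concrete instance (generic notation and sign convention assumed, shown here without algebraic constraints): a differential Riccati equation of linear-quadratic type reads

$$ \dot{X}(t) = A^T X(t) + X(t) A - X(t) B B^T X(t) + C^T C, \qquad X(0) = X_0, $$

and, given an iterate $X_k$, a Newton-Kleinman step computes $X_{k+1}$ from the differential Lyapunov equation

$$ \dot{X}_{k+1} = (A - B B^T X_k)^T X_{k+1} + X_{k+1} (A - B B^T X_k) + X_k B B^T X_k + C^T C, $$

so each Newton step only requires linear, Lyapunov-type solves; it is these linear space-time problems that the low-rank all-at-once approach exploits.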
We consider a nonlinear reaction-diffusion system of parabolic type known as the monodomain equations, which model the interaction of the electric current in a cell. Together with the FitzHugh-Nagumo model for the nonlinearity, they represent defibrillation processes of the human heart. We study a fairly general type with co-located inputs and outputs describing both boundary and distributed control and observation. The control objective is output trajectory tracking with prescribed performance. To achieve this we employ the funnel controller, which is model-free and of low complexity. The controller introduces a nonlinear and time-varying term in the closed-loop system, for which we prove existence and uniqueness of solutions. Additionally, exploiting the parabolic nature of the problem, we obtain Hölder continuity of the state, inputs and outputs. We illustrate our results by a simulation of a standard test example for the termination of reentry waves.
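For illustration, one common form of the funnel control law (a sketch; the precise gain parametrization used in the paper may differ): with tracking error $e(t) = y(t) - y_{\mathrm{ref}}(t)$ and a prescribed performance funnel with boundary $1/\varphi(t)$, the input is chosen as

$$ u(t) = -k(t)\, e(t), \qquad k(t) = \frac{1}{1 - \varphi(t)^2 \|e(t)\|^2}, $$

so the gain grows as the error approaches the funnel boundary and thereby keeps $\|e(t)\| < 1/\varphi(t)$ for all times. No model knowledge enters the control law, which is what is meant above by model-free and of low complexity.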
We formulate here an approach to model reduction that is well-suited for linear time-invariant control systems that are stabilizable and detectable but may otherwise be unstable. We introduce a modified $\mathcal{H}_2$-error metric, the $\mathcal{H}_2$-gap, that provides an effective measure of model fidelity in this setting. While the direct evaluation of the $\mathcal{H}_2$-gap requires the solutions of a pair of algebraic Riccati equations associated with related closed-loop systems, we are able to work entirely within an interpolatory framework, developing algorithms and supporting analysis that do not reference full-order closed-loop Gramians. This leads to a computationally effective strategy yielding reduced models designed so that the corresponding reduced closed-loop systems will interpolate the full-order closed-loop system at specially adapted interpolation points, without requiring evaluation of the full-order closed-loop system or even computation of the feedback law that determines it. The analytical framework and computational algorithm presented here provide an effective new approach toward constructing reduced-order models for unstable systems. Numerical examples for an unstable convection-diffusion equation and a linearized incompressible Navier-Stokes equation illustrate the effectiveness of this approach.
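To make the interpolatory mechanism concrete, the following Python sketch shows plain interpolatory projection of a system $(A, B, C)$ at given shifts. It illustrates open-loop transfer-function interpolation only, not the closed-loop $\mathcal{H}_2$-gap algorithm described above; the function name and the single-input single-output setting are illustrative assumptions.

    import numpy as np

    def interpolatory_rom(A, B, C, shifts):
        # Petrov-Galerkin projection whose reduced transfer function
        # C_r (sI - A_r)^{-1} B_r matches C (sI - A)^{-1} B at each shift
        # (SISO sketch: B is n-by-1, C is 1-by-n).
        n = A.shape[0]
        V = np.column_stack(
            [np.linalg.solve(s * np.eye(n) - A, B).ravel() for s in shifts])
        W = np.column_stack(
            [np.linalg.solve((s * np.eye(n) - A).conj().T, C.conj().T).ravel()
             for s in shifts])
        W = W @ np.linalg.inv(W.conj().T @ V).conj().T  # enforce W^H V = I
        Ar = W.conj().T @ A @ V
        Br = W.conj().T @ B
        Cr = C @ V
        return Ar, Br, Cr

Evaluating $C_r(sI - A_r)^{-1}B_r$ at any $s$ in shifts reproduces the full-order transfer function value there; the approach described above instead interpolates the relevant closed-loop systems at specially adapted points.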
The approximation of the value function associated with a stabilization problem, formulated as an optimal control problem for the Navier-Stokes equations in dimension three, by means of solutions to generalized Lyapunov equations is proposed and analyzed. The specific difficulty that the value function is not differentiable on the state space must be overcome. For this purpose a new class of generalized Lyapunov equations is introduced. Existence of unique solutions to these equations is demonstrated. They provide the basis for feedback operators, which approximate the value function, the optimal states and the optimal controls up to arbitrary order.
The value function associated with an optimal control problem subject to the Navier-Stokes equations in dimension two is analyzed. Its smoothness is established around a steady state; moreover, its derivatives are shown to satisfy a Riccati equation at order two and generalized Lyapunov equations at higher orders. An approximation of the optimal feedback law is then derived from the Taylor expansion of the value function. A convergence rate for the resulting controls and closed-loop systems is demonstrated.
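Schematically (notation assumed), the construction rests on a Taylor expansion of the value function around the steady state,

$$ \mathcal{V}(x) = \tfrac{1}{2}\,\langle \Pi x, x \rangle + \sum_{k \ge 3} \tfrac{1}{k!}\, \mathcal{T}_k(x, \dots, x), $$

where the quadratic term $\Pi$ solves the Riccati equation mentioned above and the multilinear forms $\mathcal{T}_k$ solve generalized Lyapunov equations. Truncating the expansion at a given order and inserting it into the optimal feedback relation $u(x) = -R^{-1} B^* D\mathcal{V}(x)$ yields a polynomial feedback law, to which the convergence rate mentioned above applies.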
Optimal control problems with a very large time horizon can be tackled with the Receding Horizon Control (RHC) method, which consists in solving a sequence of optimal control problems with small prediction horizon. The main result of this article is the proof of the exponential convergence (with respect to the prediction horizon) of the control generated by the RHC method towards the exact solution of the problem. The result is established for a class of infinite-dimensional linear-quadratic optimal control problems with time-independent dynamics and integral cost. Such problems satisfy the turnpike property: the optimal trajectory remains most of the time very close to the solution to the associated static optimization problem. Specific terminal cost functions, derived from the Lagrange multiplier associated with the static optimization problem, are employed in the implementation of the RHC method.
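A minimal discrete-time illustration of the RHC mechanism in Python is sketched below (a finite-dimensional LQR toy problem with the terminal weight simply set to $Q$, whereas the result above concerns infinite-dimensional problems and a terminal cost built from the Lagrange multiplier of the static problem; all names are illustrative).

    import numpy as np

    def rhc_first_gain(A, B, Q, R, N_pred):
        # Backward Riccati recursion over the prediction horizon; returns the
        # first-step feedback gain K0 of the finite-horizon LQ problem.
        P = Q.copy()
        for _ in range(N_pred):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K

    def receding_horizon(A, B, Q, R, x0, N_pred, n_steps):
        # Time-invariant data: the first-step gain is the same in every
        # prediction window, so it is computed once and reused.
        K = rhc_first_gain(A, B, Q, R, N_pred)
        x, xs = x0, [x0]
        for _ in range(n_steps):
            u = -K @ x           # apply only the first move of each window
            x = A @ x + B @ u    # then shift the window by one step
            xs.append(x)
        return np.array(xs)

Roughly speaking, the exponential convergence result above states that the trajectory generated in this fashion approaches the solution of the full-horizon problem exponentially fast in the prediction horizon (here: the number of steps per window), provided the terminal cost is chosen as described.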