
Discrete-Time Linear-Quadratic Regulation via Optimal Transport

Publication date: 2021
Language: English





In this paper, we consider a discrete-time stochastic control problem with uncertain initial and target states. We first discuss the connection between optimal transport and stochastic control problems of this form. Next, we formulate a linear-quadratic regulator problem in which the initial and terminal states are distributed according to specified probability densities. A closed-form solution for the optimal transport map in the case of linear time-varying systems is derived, along with an algorithm for computing the optimal map. Two numerical examples pertaining to swarm deployment demonstrate the practical applicability of the model and the performance of the numerical method.
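For orientation, two standard building blocks of such a formulation can be sketched in a few lines: a backward Riccati recursion for a time-varying discrete-time LQR, and the classical closed-form transport map between Gaussian densities. This is a minimal illustration only, not the paper's algorithm; all matrices, the horizon, and the squared-Euclidean ground cost used for the Gaussian map are placeholder assumptions.

```python
# Minimal sketch (illustrative, not the paper's method).
import numpy as np
from scipy.linalg import sqrtm

def finite_horizon_lqr(A_seq, B_seq, Q, R, Qf):
    """Backward Riccati recursion for a time-varying discrete-time LQR.
    Returns gains K_k such that u_k = -K_k x_k."""
    P = Qf
    gains = []
    for A, B in zip(reversed(A_seq), reversed(B_seq)):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return list(reversed(gains))

def gaussian_monge_map(m0, S0, m1, S1):
    """Closed-form optimal transport map T(x) = m1 + M (x - m0) between
    N(m0, S0) and N(m1, S1) under the squared-Euclidean cost."""
    S0h = np.real(sqrtm(S0))
    S0h_inv = np.linalg.inv(S0h)
    M = S0h_inv @ np.real(sqrtm(S0h @ S1 @ S0h)) @ S0h_inv
    return lambda x: m1 + M @ (x - m0)

# Placeholder double-integrator dynamics over a horizon of 50 steps
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
gains = finite_horizon_lqr([A] * 50, [B] * 50, Q=np.eye(2),
                           R=np.array([[1.0]]), Qf=10 * np.eye(2))
T = gaussian_monge_map(np.zeros(2), np.eye(2), np.ones(2), 2 * np.eye(2))
```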




Read More

The linear-quadratic regulator (LQR) is an efficient control method for linear and linearized systems. Typically, LQR is implemented in minimal coordinates (also called generalized or joint coordinates). However, other coordinates are possible, and recent research suggests that there may be numerical and control-theoretic advantages to using higher-dimensional non-minimal state parameterizations for dynamical systems. One such parameterization is maximal coordinates, in which each link in a multi-body system is parameterized by its full six degrees of freedom and joints between links are modeled with algebraic constraints. Such constraints can also represent closed kinematic loops or contact with the environment. This paper investigates the difference between minimal- and maximal-coordinate LQR control laws. A case study applying LQR to a simple pendulum, together with simulations comparing the basins of attraction and tracking performance of the two controllers, suggests that maximal-coordinate LQR achieves greater robustness and improved tracking performance compared to minimal-coordinate LQR when applied to nonlinear systems.
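As a point of reference for the comparison above, the conventional minimal-coordinate baseline is compact: for a pendulum the state is just the joint angle and its rate. The sketch below shows that baseline for a pendulum linearized about the upright (inverted) equilibrium; the parameters and weights are illustrative, not taken from the paper, and a maximal-coordinate formulation would instead carry the link's full six-degree-of-freedom state together with joint constraints.

```python
# Minimal-coordinate LQR baseline for a pendulum (illustrative parameters).
import numpy as np
from scipy.linalg import solve_continuous_are

m, l, g = 1.0, 1.0, 9.81   # hypothetical mass, length, gravity

# State x = [theta, theta_dot], linearized about the upright equilibrium.
A = np.array([[0.0, 1.0],
              [g / l, 0.0]])
B = np.array([[0.0],
              [1.0 / (m * l**2)]])

Q = np.diag([10.0, 1.0])   # state weight (placeholder)
R = np.array([[0.1]])      # input weight (placeholder)

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # feedback law u = -K x
```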
Jingrui Sun, Zhen Wu, Jie Xiong (2021)
This paper is concerned with a backward stochastic linear-quadratic (LQ, for short) optimal control problem with deterministic coefficients. The weighting matrices are allowed to be indefinite, and cross-product terms in the control and state processes are present in the cost functional. Based on a Hilbert space method, necessary and sufficient conditions are derived for the solvability of the problem, and a general approach for constructing optimal controls is developed. The crucial step in this construction is to establish the solvability of a Riccati-type equation, which is accomplished under a fairly weak condition by investigating the connection with forward stochastic LQ optimal control problems.
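For concreteness, the state–control cross terms mentioned above have the generic shape below (notation ours, written for a running quadratic cost with a quadratic endpoint penalty; in a backward formulation the terminal state is prescribed, so the endpoint penalty is placed at the initial time, and none of Q, S, R, G need be positive semidefinite):

\[
J(u) \;=\; \mathbb{E}\!\left[\,\langle G X_0, X_0\rangle \;+\; \int_0^T \Big(\langle Q(t)X(t),X(t)\rangle \;+\; 2\langle S(t)X(t),u(t)\rangle \;+\; \langle R(t)u(t),u(t)\rangle\Big)\,dt\right].
\]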
Wuchen Li, Guido Montufar (2018)
We study a natural Wasserstein gradient flow on manifolds of probability distributions with discrete sample spaces. We derive the Riemannian structure for the probability simplex from the dynamical formulation of the Wasserstein distance on a weighted graph. We pull back the geometric structure to the parameter space of any given probability model, which allows us to define a natural gradient flow there. In contrast to the natural Fisher-Rao gradient, the natural Wasserstein gradient incorporates a ground metric on sample space. We illustrate the analysis of elementary exponential family examples and demonstrate an application of the Wasserstein natural gradient to maximum likelihood estimation.
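Schematically (notation ours), the natural-gradient step that this construction yields replaces the Euclidean gradient by one preconditioned with the pulled-back metric:

\[
\theta_{k+1} \;=\; \theta_k \;-\; \eta\, G(\theta_k)^{-1}\,\nabla_\theta \ell(\theta_k),
\qquad
G(\theta) \;=\; J_p(\theta)^{\top}\, G_W\!\big(p(\theta)\big)\, J_p(\theta),
\]

where \(J_p(\theta)\) is the Jacobian of the probability model \(p(\theta)\) and \(G_W(p)\) is the Wasserstein metric tensor on the simplex obtained from the weighted-graph Laplacian; replacing \(G_W\) with the Fisher–Rao metric recovers the usual natural gradient.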
This paper is concerned with the distributed linear quadratic optimal control problem. In particular, we consider a suboptimal version of the distributed optimal control problem for undirected multi-agent networks. Given a multi-agent system with identical agent dynamics and an associated global quadratic cost functional, our objective is to design suboptimal distributed control laws that guarantee that the controlled network reaches consensus and that the associated cost is smaller than an a priori given upper bound. We first analyze the suboptimality for a given linear system and then apply the results to linear multi-agent systems. Two design methods are then provided to compute such suboptimal distributed controllers, involving the solution of a single Riccati inequality of dimension equal to the dimension of the agent dynamics, together with the smallest nonzero and the largest eigenvalue of the graph Laplacian. Furthermore, we relax the requirement of exact knowledge of these two eigenvalues by using only lower and upper bounds on them. Finally, a simulation example is provided to illustrate our design method.
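The design above boils down to two ingredients: a single Riccati solve at the agent's state dimension and the extreme nonzero eigenvalues of the graph Laplacian. The sketch below illustrates that flavor of computation only; the dynamics, weights, and coupling-gain rule are placeholders, not the paper's exact design.

```python
# Illustrative sketch: one agent-dimension Riccati solve + Laplacian spectrum.
import numpy as np
from scipy.linalg import solve_continuous_are

# Placeholder agent dynamics (double integrator)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Graph Laplacian of an undirected path graph with 4 agents
L = np.array([[ 1, -1,  0,  0],
              [-1,  2, -1,  0],
              [ 0, -1,  2, -1],
              [ 0,  0, -1,  1]], dtype=float)

eigs = np.sort(np.linalg.eigvalsh(L))
lam2, lamN = eigs[1], eigs[-1]          # smallest nonzero and largest eigenvalue

# Single Riccati solve at the agent's dimension (placeholder weights)
Q, R = np.eye(2), np.array([[1.0]])
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# Consensus-type distributed protocol u_i = -c * K * sum_j L_ij x_j,
# with a coupling gain chosen relative to the Laplacian spectrum (illustrative).
c = 1.0 / lam2
```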
In most real cases, the transition probabilities between the operational modes of Markov jump linear systems cannot be computed exactly and are time-varying. We take this aspect into account by considering Markov jump linear systems in which the underlying Markov chain is polytopic and time-inhomogeneous, i.e., its transition probability matrix varies over time, with variations that are arbitrary within a polytopic set of stochastic matrices. We address and solve the infinite-horizon optimal control problem for this class of systems. In particular, we show that the optimal controller can be obtained from a set of coupled algebraic Riccati equations, and that for mean-square stabilizable systems the optimal finite-horizon cost corresponding to the solution of a parsimonious set of coupled difference Riccati equations converges exponentially fast to the optimal infinite-horizon cost associated with the set of coupled algebraic Riccati equations. All the presented concepts are illustrated on a numerical example demonstrating the efficiency of the proposed solution.
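For a nominal, time-homogeneous transition matrix, the coupled difference Riccati equations mentioned above can be iterated to convergence as sketched below. The paper's polytopic, time-inhomogeneous setting works with a whole set of such transition matrices, so this is only an illustration of the homogeneous special case with placeholder data.

```python
# Illustrative sketch: coupled difference Riccati iteration for a
# discrete-time Markov jump linear system with a fixed transition matrix.
import numpy as np

def coupled_riccati_iteration(A, B, Q, R, Pi, iters=500):
    """Iterate the coupled difference Riccati equations of an MJLS with modes
    (A[i], B[i]) and transition matrix Pi; returns per-mode matrices P[i]."""
    N = len(A)
    P = [np.copy(Qi) for Qi in Q]
    for _ in range(iters):
        # Mode-wise expectation of the next-step value matrices
        E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
        P_new = []
        for i in range(N):
            G = R[i] + B[i].T @ E[i] @ B[i]
            K = np.linalg.solve(G, B[i].T @ E[i] @ A[i])
            P_new.append(Q[i] + A[i].T @ E[i] @ (A[i] - B[i] @ K))
        P = P_new
    return P

# Two illustrative scalar modes (placeholder data)
A = [np.array([[1.1]]), np.array([[0.9]])]
B = [np.array([[1.0]]), np.array([[0.5]])]
Q = [np.eye(1), np.eye(1)]
R = [np.eye(1), np.eye(1)]
Pi = np.array([[0.9, 0.1],
               [0.2, 0.8]])
P = coupled_riccati_iteration(A, B, Q, R, Pi)
```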