A dynamical system entrains to a periodic input if its state converges globally to an attractor with the same period. In particular, for a constant input the state converges to a unique equilibrium point from any initial condition. We consider the problem of maximizing a weighted average of the system's output along the periodic attractor. The gain of entrainment is the benefit achieved by using a non-constant periodic input relative to a constant input with the same time average. Such a problem amounts to the optimal allocation of resources in a periodic manner. We formulate it as a periodic optimal control problem, which can be analyzed by means of the Pontryagin maximum principle or solved numerically via powerful software packages. We then apply our framework to a class of occupancy models that appear frequently in biological synthesis systems and other applications. We show that, perhaps surprisingly, constant inputs are optimal for various architectures. This suggests that the presence of non-constant periodic signals, which frequently appear in biological occupancy systems, is a signature of an underlying time-varying objective functional being optimized.
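The gain of entrainment can be illustrated with a minimal numerical sketch, assuming a toy scalar system dx/dt = -x + u(t) with convex output y = x^2 (an invented example, not one of the occupancy models treated in the paper): the state entrains to a periodic attractor, and a sinusoidal input with mean 1 outperforms the constant input u = 1.

```python
import numpy as np

def average_output(u, T=2 * np.pi, dt=1e-2, n_periods=100):
    """Average of y = x**2 along the attractor of dx/dt = -x + u(t)."""
    x, t = 0.0, 0.0
    steps = int(round(n_periods * T / dt))
    settle = steps // 2              # discard the transient half
    acc, n = 0.0, 0
    for k in range(steps):
        x += dt * (-x + u(t))        # forward Euler step
        t += dt
        if k >= settle:
            acc += x ** 2            # convex output y = x**2
            n += 1
    return acc / n

const_avg = average_output(lambda t: 1.0)                 # constant input
periodic_avg = average_output(lambda t: 1.0 + np.sin(t))  # same time average
gain = periodic_avg - const_avg                           # gain of entrainment
```

For this convex output, Jensen's inequality already suggests a positive gain (here approximately 0.25); the abstract's result is that for the occupancy architectures studied the gain is not positive, so constant inputs are optimal.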
In this effort, a novel operator-theoretic framework is developed for the data-driven solution of optimal control problems. The developed methods treat trajectories (i.e., time series) as the fundamental unit of data. Trajectory information is embedded in a reproducing kernel Hilbert space (RKHS) through what are called occupation kernels. The occupation kernels are tied to the dynamics of the system through the densely defined Liouville operator. The pairing of Liouville operators and occupation kernels allows nonlinear, finite-dimensional optimal control problems to be lifted into infinite-dimensional linear programs over RKHSs.
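A hedged sketch of the occupation-kernel construction (the kernel choice and the quadrature rule are assumptions for illustration): given a sampled trajectory theta on [0, T], the occupation kernel is the RKHS function Gamma_theta(x) = integral of K(x, theta(t)) dt over [0, T], approximated here by a Riemann sum.

```python
import numpy as np

def rbf(x, y, mu=1.0):
    # Gaussian RBF kernel for the ambient RKHS (an assumed choice).
    return np.exp(-np.sum((x - y) ** 2) / mu)

def occupation_kernel(traj, dt, mu=1.0):
    # Gamma_theta(x) = int_0^T K(x, theta(t)) dt, approximated by a
    # Riemann sum over the trajectory samples.
    return lambda x: dt * sum(rbf(x, p, mu) for p in traj)

# Sampled trajectory of xdot = -x from x(0) = 1 over [0, 1]:
ts = np.linspace(0.0, 1.0, 101)
traj = np.exp(-ts).reshape(-1, 1)
Gamma = occupation_kernel(traj, ts[1] - ts[0])
near = Gamma(np.array([0.5]))   # point close to the trajectory
far = Gamma(np.array([5.0]))    # point far from the trajectory
```

The defining reproducing property is that the inner product of f with Gamma_theta equals the integral of f(theta(t)) dt, so inner products between occupation kernels reduce to double trajectory integrals of K; these are the Gram-matrix entries used when the lifted linear program is discretized.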
We propose a mean-field optimal control problem for the parameter identification of a given pattern. The cost functional is based on the Wasserstein distance between the probability measures of the modeled and the desired patterns. The first-order optimality conditions corresponding to the optimal control problem are derived using a Lagrangian approach on the mean-field level. Based on these conditions, we propose a gradient descent method to identify relevant parameters, such as the angle of rotation and the force scaling, which may be spatially inhomogeneous. We discretize the first-order optimality conditions in order to employ the algorithm on the particle level. Moreover, we prove a convergence rate for the controls as the number of particles used for the discretization tends to infinity. Numerical results for the spatially homogeneous case demonstrate the feasibility of the approach.
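One ingredient can be sketched concretely: in one dimension, the Wasserstein distance between two equal-size empirical (particle) measures reduces to matching order statistics, a standard fact; the function name and the shift example below are invented for illustration.

```python
import numpy as np

def wasserstein_1d(xs, ys):
    # p = 1 Wasserstein distance between two empirical measures with
    # equally many particles: optimal transport in 1D matches sorted
    # samples, so the cost is the mean absolute sorted difference.
    return np.mean(np.abs(np.sort(xs) - np.sort(ys)))

rng = np.random.default_rng(0)
model = rng.uniform(0.0, 1.0, 500)    # particles of the modeled pattern
target = model + 0.3                  # desired pattern: a pure shift
dist = wasserstein_1d(model, target)  # a shift by 0.3 costs exactly 0.3
```

In a gradient descent loop over pattern parameters, a cost of this kind would be evaluated at each iteration; the paper derives the corresponding first-order conditions on the mean-field level rather than differentiating through particles directly.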
This paper addresses control synthesis for nonlinear optimal control problems in the presence of state and input constraints. The presented approach relies upon transforming the given problem into an infinite-dimensional linear program over the space of measures. To approximate this infinite-dimensional program, a sequence of semidefinite programs (SDPs) is formulated for the case of polynomial cost and dynamics with semi-algebraic state constraints and bounded input constraints. A method to extract a polynomial control function from each SDP is also given. This paper proves that the controllers synthesized from these SDPs generate a sequence of values that converges from below to the optimal value of the original problem. In contrast to existing approaches, the presented method does not assume that the optimal control is continuous while still proving that the sequence of approximations is optimal. Moreover, the sequence of controllers synthesized using the presented approach is proven to converge to the true optimal control. The performance of the presented method is demonstrated on three examples.
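For orientation, the infinite-dimensional linear program in question can be sketched in the standard occupation-measure form (a generic Lasserre-type formulation, consistent with but not quoted from the paper): with running cost h, terminal cost H, dynamics f, and initial state x_0,

```latex
\inf_{\mu,\,\mu_T}\ \int h(t,x,u)\,\mathrm{d}\mu \;+\; \int H(x)\,\mathrm{d}\mu_T
\quad\text{subject to}\quad
\int v(T,x)\,\mathrm{d}\mu_T
  = v(0,x_0) + \int \Big(\partial_t v + \nabla_x v \cdot f(t,x,u)\Big)\,\mathrm{d}\mu
\quad\text{for all test functions } v,
```

where \mu is the occupation measure of the controlled trajectory, supported on the state and input constraint sets; truncating its moment sequence at increasing degrees yields a hierarchy of SDPs of the kind described in the abstract.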
Optimization problems governed by Allen-Cahn systems including elastic effects are formulated, and first-order necessary optimality conditions are presented. Smooth as well as obstacle potentials are considered, where the latter leads to a mathematical program with equilibrium constraints (MPEC). Numerically, for smooth potentials the problem is solved efficiently by the Trust-Region-Newton-Steihaug-cg method. In the case of an obstacle potential, first numerical results are presented.
We introduce a hybrid (discrete-continuous) safety controller which enforces strict state and input constraints on a system, but only acts when necessary, preserving transparent operation of the original system within some safe region of the state space. We define this region using a Min-Quadratic Barrier function, which we construct along the equilibrium manifold using the Lyapunov functions that result from linear matrix inequality controller synthesis for locally valid uncertain linearizations. We also introduce the concept of a barrier pair, which makes it easy to extend the approach to include trajectory-based augmentations to the safe region, in the style of LQR-Trees. We demonstrate our controller and barrier pair synthesis method in simulation-based examples.
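The "acts only when necessary" behavior can be illustrated with a one-dimensional toy sketch (the barrier B and the override rule below are invented for illustration; the paper's construction uses LMI-synthesized Lyapunov functions and the Min-Quadratic Barrier, neither of which appears here):

```python
def safety_filter(x, u_nom, dt, B=lambda x: 1.0 - x ** 2):
    # Pass the nominal input through unless the Euler-predicted next
    # state would leave the safe region {x : B(x) >= 0}; only then
    # does the safety layer intervene (here with the trivially safe
    # input u = 0 for the toy dynamics xdot = u).
    if B(x + dt * u_nom) >= 0.0:
        return u_nom        # transparent operation inside the safe region
    return 0.0              # safety override at the boundary

# A nominal input tries to push xdot = u out of the safe set [-1, 1]:
x, dt = 0.0, 0.01
for _ in range(1000):
    x += dt * safety_filter(x, u_nom=2.0, dt=dt)
```

The state advances untouched until it reaches the boundary of the safe set and then stops there; in the paper, the safe region is instead certified by the Min-Quadratic Barrier function constructed along the equilibrium manifold.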