We study a family of optimal control problems in which one aims at minimizing a cost that mixes a quadratic control penalization and the variance of the system, both for finitely many agents and for the mean-field dynamics obtained as their number goes to infinity. While solutions of the discrete problems always exist in a unique and explicit form, the behavior of their macroscopic counterparts is very sensitive to the magnitude of the time horizon and of the penalization parameter. When one minimizes the final variance, there always exists a Lipschitz-in-space optimal control for the infinite-dimensional problem, which can be obtained as a suitable extension of the optimal controls of the finite-dimensional problems. The same holds true for variance maximization whenever the time horizon is sufficiently small. On the contrary, for large final times (or, equivalently, for small penalizations of the control cost), it can be proven that no Lipschitz-regular optimal control exists for the macroscopic problem.
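As a schematic illustration (in our own notation, which need not coincide with the authors'), the macroscopic problem is of the form
\begin{equation*}
\min_{u}\;\frac{\lambda}{2}\int_0^T\!\!\int_{\mathbb{R}^d}|u(t,x)|^2\,\mathrm{d}\mu_t(x)\,\mathrm{d}t\;\pm\;\mathrm{Var}(\mu_T),\qquad \mathrm{Var}(\mu):=\int_{\mathbb{R}^d}\Big|x-\int_{\mathbb{R}^d}y\,\mathrm{d}\mu(y)\Big|^2\,\mathrm{d}\mu(x),
\end{equation*}
subject to the continuity equation $\partial_t\mu_t+\nabla\cdot\big(u(t,\cdot)\,\mu_t\big)=0$ with $\mu_0$ prescribed, where the sign in front of the variance term distinguishes minimization from maximization, $\lambda>0$ is the penalization parameter, and a time rescaling relates the regimes of large horizon $T$ and small penalization $\lambda$ mentioned above.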
This paper studies asymptotic solvability of a linear quadratic (LQ) mean field social optimization problem with controlled diffusions and indefinite state and control weights. Starting with an $N$-agent model, we employ a rescaling approach to derive a low-dimensional Riccati ordinary differential equation (ODE) system, which characterizes a necessary and sufficient condition for asymptotic solvability. The decentralized control obtained from the mean field limit ensures a bounded optimality loss in minimizing the social cost, which has magnitude $O(N)$; this implies an optimality loss of $O(1/N)$ per agent. We further quantify the efficiency gain of the social optimum with respect to the solution of the mean field game.
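For orientation, a prototypical instance of such a model, written in our own notation and omitting the controlled-diffusion and indefinite-weight features that the paper actually treats, is
\begin{align*}
&\mathrm{d}X_i(t)=\big(A\,X_i(t)+B\,u_i(t)+G\,x^{(N)}(t)\big)\,\mathrm{d}t+\sigma\,\mathrm{d}W_i(t),\qquad x^{(N)}:=\frac{1}{N}\sum_{j=1}^{N}X_j,\\
&J_{\mathrm{soc}}(u)=\sum_{i=1}^{N}\mathbb{E}\int_0^T\Big[\big(X_i-\Gamma\,x^{(N)}\big)^{\!\top}Q\,\big(X_i-\Gamma\,x^{(N)}\big)+u_i^{\top}R\,u_i\Big]\,\mathrm{d}t,
\end{align*}
where asymptotic solvability amounts to a low-dimensional Riccati ODE system, of the generic type $\dot{P}+A^{\top}P+PA-PBR^{-1}B^{\top}P+Q=0$ coupled with an equation for the mean-field component, admitting a solution on the whole interval $[0,T]$.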
In this paper we model the role of a government of a large population as a mean field optimal control problem. Such control problems are constrained by a PDE of continuity type, governing the dynamics of the probability distribution of the agent population. We show the existence of mean field optimal controls both in the stochastic and in the deterministic setting. We rigorously derive the first-order optimality conditions useful for the numerical computation of mean field optimal controls. We introduce a novel approximating hierarchy of sub-optimal controls based on a Boltzmann approach, whose computation requires only a moderate numerical complexity compared with that of the optimal control. We provide numerical experiments for models in opinion formation comparing the behavior of the control hierarchy.
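For reference (our notation, for a simplified prototype with local interaction rather than the nonlocal model of the paper), minimizing $\int_0^T\!\int\big(L(x)+\tfrac{1}{2}|u(t,x)|^2\big)\,\mathrm{d}\mu_t(x)\,\mathrm{d}t$ subject to $\partial_t\mu_t+\nabla\cdot\big((f(x)+u(t,x))\,\mu_t\big)=0$ leads, up to sign conventions, to the forward-backward optimality system
\begin{align*}
&\partial_t\mu_t+\nabla\cdot\big((f(x)-\nabla\psi(t,x))\,\mu_t\big)=0, && \mu_0\ \text{given (forward)},\\
&\partial_t\psi+f(x)\cdot\nabla\psi-\tfrac{1}{2}|\nabla\psi|^2+L(x)=0, && \psi(T,\cdot)=0\ \text{(backward)},
\end{align*}
with optimal feedback $u^{*}(t,x)=-\nabla\psi(t,x)$; in the genuinely nonlocal setting treated in the paper, additional interaction terms enter the adjoint equation.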
We study a multiscale approach for the control of agent-based, two-population models. The control variable acts on one population of leaders, which influences the population of followers via the coupling generated by their interaction. We cast a quadratic optimal control problem for the large-scale microscopic model, which is approximated via a Boltzmann approach. By sampling solutions of the optimal control problem associated with binary two-population dynamics, we generate sub-optimal control laws for the kinetic limit of the multi-population model. We present numerical experiments related to opinion dynamics assessing the performance of the proposed control design.
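As an illustration of the binary-sampling idea, the following toy Python sketch evolves a leader-follower opinion model with an instantaneous feedback acting on the leaders; the interaction rule, the feedback law and all parameters are illustrative placeholders, not the model or the control design of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    N_f, N_l = 500, 50                        # followers, leaders
    x_f = rng.uniform(-1.0, 1.0, N_f)         # follower opinions
    x_l = rng.uniform(-1.0, 1.0, N_l)         # leader opinions
    target, dt, eps, kappa = 0.5, 0.05, 0.5, 2.0   # toy parameters

    for _ in range(400):
        # each follower interacts with one randomly sampled leader
        j = rng.integers(0, N_l, N_f)
        x_f += dt * eps * (x_l[j] - x_f)
        # leaders interact among themselves and are steered by a feedback control
        k = rng.integers(0, N_l, N_l)
        u = -kappa * (x_l - target)           # instantaneous (sub-optimal) feedback
        x_l += dt * (eps * (x_l[k] - x_l) + u)

    print("mean follower opinion:", x_f.mean())   # drifts toward the leaders' target

The binary structure (one randomly sampled interaction partner per agent and per step) keeps the cost of each update linear in the number of agents, which is why such Boltzmann-type samplings are computationally moderate compared with all-pairs interactions.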
In this article, we provide sufficient conditions under which the controlled vector fields solving optimal control problems formulated on continuity equations are Lipschitz regular in space. Our approach involves a novel combination of mean-field approximations for infinite-dimensional multi-agent optimal control problems, along with a careful extension of an existence result for locally optimal Lipschitz feedbacks. The latter is based on the reformulation of a coercivity estimate in the language of Wasserstein calculus, which is used to obtain uniform Lipschitz bounds along sequences of approximations by empirical measures.
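Schematically (and in our own notation), the coercivity estimate in question is a second-order condition of the form
\begin{equation*}
D^2\mathcal{J}(u^{*})[v,v]\;\geq\;\rho\int_0^T\!\!\int_{\mathbb{R}^d}|v(t,x)|^2\,\mathrm{d}\mu_t(x)\,\mathrm{d}t\qquad\text{for some }\rho>0\text{ and all admissible perturbations }v,
\end{equation*}
and it is its reformulation in terms of Wasserstein calculus that yields Lipschitz bounds which are uniform along sequences of empirical-measure approximations.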
We propose a mean-field optimal control problem for the parameter identification of a given pattern. The cost functional is based on the Wasserstein distance between the probability measures of the modeled and the desired patterns. The first-order optimality conditions corresponding to the optimal control problem are derived using a Lagrangian approach on the mean-field level. Based on these conditions, we propose a gradient descent method to identify relevant parameters such as the angle of rotation and the force scaling, which may be spatially inhomogeneous. We discretize the first-order optimality conditions in order to employ the algorithm on the particle level. Moreover, we prove a convergence rate for the controls as the number of particles used for the discretization tends to infinity. Numerical results for the spatially homogeneous case demonstrate the feasibility of the approach.
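To illustrate the overall identification loop, here is a deliberately simplified one-dimensional Python sketch; the dynamics, the single scalar parameter and the finite-difference gradient are stand-ins for the mean-field model and the adjoint-based gradient derived in the paper.

    import numpy as np
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(1)

    def simulate(theta, x0, steps=200, dt=0.01):
        # Toy 1D aggregation dynamics x_i' = -theta * (x_i - mean(x));
        # theta plays the role of a force-scaling parameter.
        x = x0.copy()
        for _ in range(steps):
            x = x - dt * theta * (x - x.mean())
        return x

    x0 = rng.normal(0.0, 1.0, 400)
    x_target = simulate(2.0, x0)              # synthetic "desired pattern"

    theta, lr, h = 0.5, 0.5, 1e-3             # initial guess, step size, FD increment
    for _ in range(60):
        # central finite difference of theta -> W_1(mu_theta(T), mu_target)
        g = (wasserstein_distance(simulate(theta + h, x0), x_target)
             - wasserstein_distance(simulate(theta - h, x0), x_target)) / (2.0 * h)
        theta -= lr * g

    print("identified force scaling:", theta)  # approaches the true value 2.0

In the particle-level algorithm of the paper, the finite-difference step would be replaced by the gradient obtained from the discretized first-order optimality conditions.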