
Mean-Field Optimal Control of Continuity Equations and Differential Inclusions

Published by: Benoît Bonnet
Publication date: 2020
Language: English





In this article, we propose a new unifying framework for the investigation of multi-agent control problems in the mean-field setting. Our approach is based on a new definition of differential inclusions for continuity equations formulated in the Wasserstein spaces of optimal transport. The latter allows us to extend several known results of the classical theory of differential inclusions, and to prove an exact correspondence between solutions of differential inclusions and control systems. We show its appropriateness on an example of a leader-follower evacuation problem.
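To make the setting concrete, here is a minimal numerical sketch (not from the paper): the continuity equation is approximated by an empirical measure of particles, the differential inclusion prescribes a set of admissible velocities $F(x)$, and a control corresponds to choosing a measurable selection $u(x) \in F(x)$. The dynamics $F$, the selection, and all parameters below are illustrative assumptions.

```python
import numpy as np

# Toy 1-D differential inclusion  dx/dt ∈ F(x) = [-|x|, |x|],
# discretised over a swarm of particles representing the measure mu_t.
rng = np.random.default_rng(0)
particles = rng.normal(0.0, 1.0, size=500)  # samples of the initial measure mu_0

def selection(x):
    # One admissible selection of F: steer every particle toward the origin.
    # (Any other measurable choice inside F(x) is an equally valid control.)
    return -np.sign(x) * np.abs(x)

dt, steps = 0.01, 200
for _ in range(steps):
    # Explicit Euler step of the controlled continuity equation.
    particles = particles + dt * selection(particles)

# Under this selection the empirical measure contracts toward a Dirac at 0.
print(float(np.mean(np.abs(particles))))
```

The point of the sketch is only the correspondence emphasised in the abstract: each selection of the inclusion induces a well-defined controlled flow of the measure.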




Related research

We study the problem of optimal inside control of an SPDE (a stochastic evolution equation) driven by a Brownian motion and a Poisson random measure. Our optimal control problem is new in two ways: (i) The controller has access to inside information, i.e. access to information about a future state of the system, (ii) The integro-differential operator of the SPDE might depend on the control. In the first part of the paper, we formulate a sufficient and a necessary maximum principle for this type of control problem, in two cases: (1) When the control is allowed to depend both on time t and on the space variable x. (2) When the control is not allowed to depend on x. In the second part of the paper, we apply the results above to the problem of optimal control of an SDE system when the inside controller has only noisy observations of the state of the system. Using results from nonlinear filtering, we transform this noisy observation SDE inside control problem into a full observation SPDE insider control problem. The results are illustrated by explicit examples.
A mean-field selective optimal control problem of multipopulation dynamics via transient leadership is considered. The agents in the system are described by their spatial position and their probability of belonging to a certain population. The dynamics in the control problem is characterized by the presence of an activation function which tunes the control on each agent according to the membership to a population, which, in turn, evolves according to a Markov-type jump process. This way, a hypothetical policy maker can select a restricted pool of agents to act upon based, for instance, on their time-dependent influence on the rest of the population. A finite-particle control problem is studied and its mean-field limit is identified via $\Gamma$-convergence, ensuring convergence of optimal controls. The dynamics of the mean-field optimal control is governed by a continuity-type equation without diffusion. Specific applications in the context of opinion dynamics are discussed with some numerical experiments.
We propose a mean-field optimal control problem for the parameter identification of a given pattern. The cost functional is based on the Wasserstein distance between the probability measures of the modeled and the desired patterns. The first-order optimality conditions corresponding to the optimal control problem are derived using a Lagrangian approach on the mean-field level. Based on these conditions we propose a gradient descent method to identify relevant parameters such as angle of rotation and force scaling which may be spatially inhomogeneous. We discretize the first-order optimality conditions in order to employ the algorithm on the particle level. Moreover, we prove a rate for the convergence of the controls as the number of particles used for the discretization tends to infinity. Numerical results for the spatially homogeneous case demonstrate the feasibility of the approach.
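The idea of descending a Wasserstein cost in a pattern parameter can be sketched in one dimension, where the Wasserstein-2 distance between empirical measures reduces to the L2 distance of the sorted samples. Everything here (the scalar force-scaling parameter `a`, the finite-difference gradient, the step size) is an illustrative assumption, not the authors' algorithm.

```python
import numpy as np

# Identify a scalar scaling parameter `a` by gradient descent on the
# squared 1-D Wasserstein-2 distance between model and target patterns.
rng = np.random.default_rng(1)
base = rng.normal(0.0, 1.0, size=300)
a_true = 2.5
target = np.sort(a_true * base)            # desired particle pattern

def w2_sq(a):
    # In 1-D, W2^2 between empirical measures = mean squared difference
    # of the sorted samples (monotone rearrangement coupling).
    model = np.sort(a * base)
    return np.mean((model - target) ** 2)

a, lr, eps = 0.5, 0.1, 1e-5
for _ in range(100):
    # Central finite-difference gradient of the Wasserstein cost in `a`.
    grad = (w2_sq(a + eps) - w2_sq(a - eps)) / (2 * eps)
    a -= lr * grad

print(round(a, 2))  # recovers a value close to a_true = 2.5
```

Since the cost is quadratic in `a` here, the descent contracts geometrically toward the true scaling; the paper's setting replaces this toy cost with the mean-field Wasserstein functional and its adjoint-based gradient.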
We present a probabilistic formulation of risk-aware optimal control problems for stochastic differential equations. Risk awareness is in our framework captured by objective functions in which the risk-neutral expectation is replaced by a risk function, a nonlinear functional of random variables that accounts for the controller's risk preferences. We state and prove a risk-aware minimum principle that is a parsimonious generalization of the well-known risk-neutral stochastic Pontryagin's minimum principle. As our main results we give necessary and also sufficient conditions for optimality of control processes taking values on probability measures defined on a given action space. We show that, remarkably, going from the risk-neutral to the risk-aware case, the minimum principle is simply modified by the introduction of one additional real-valued stochastic process that acts as a risk adjustment factor for given cost rate and terminal cost functions. This adjustment process is explicitly given as the expectation, conditional on the filtration at the given time, of an appropriately defined functional derivative of the risk function evaluated at the random total cost. For our results we rely on the Fréchet differentiability of the risk function, and for completeness, we prove under mild assumptions the existence of Fréchet derivatives of some common risk functions. We give a simple application of the results for a portfolio allocation problem and show that the risk awareness of the objective function gives rise to a risk premium term that is characterized by the risk adjustment process described above. This suggests uses of our results in e.g. pricing of risk modeled by generic risk functions in financial applications.
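One standard example of such a risk function is the entropic risk $\rho_\theta(X) = \frac{1}{\theta}\log \mathbb{E}[e^{\theta X}]$, which recovers the risk-neutral expectation as $\theta \to 0$ and penalises the right tail of the cost for $\theta > 0$. The sketch below (the distribution and $\theta$ are illustrative choices, not taken from the paper) shows the gap between the risk-neutral and risk-aware valuations of a random total cost.

```python
import numpy as np

# Entropic risk rho_theta(X) = log(E[exp(theta * X)]) / theta, a common
# risk function replacing the risk-neutral expectation E[X].
rng = np.random.default_rng(2)
cost = rng.normal(1.0, 0.5, size=200_000)   # random total cost X ~ N(1, 0.25)

def entropic_risk(x, theta):
    return np.log(np.mean(np.exp(theta * x))) / theta

risk_neutral = float(np.mean(cost))
risk_aware = float(entropic_risk(cost, 2.0))
# For Gaussian X the closed form is mu + theta * sigma^2 / 2 = 1.25,
# so the risk-aware value exceeds the risk-neutral one by a risk premium.
print(round(risk_neutral, 2), round(risk_aware, 2))
```

The difference `risk_aware - risk_neutral` is precisely the kind of risk premium term that the abstract attributes to the risk adjustment process.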
In this article, we investigate some of the fine properties of the value function associated to an optimal control problem in the Wasserstein space of probability measures. Building on new interpolation and linearisation formulas for non-local flows, we prove semiconcavity estimates for the value function, and establish several variants of the so-called sensitivity relations which provide connections between its superdifferential and the adjoint curves stemming from the maximum principle. We subsequently make use of these results to study the propagation of regularity for the value function along optimal trajectories, as well as to investigate sufficient optimality conditions and optimal feedbacks for mean-field optimal control problems.