
Differential Inclusions in Wasserstein Spaces: The Cauchy-Lipschitz Framework

Posted by Benoît Bonnet
Publication date: 2020
Research field:
Language: English





In this article, we propose a general framework for the study of differential inclusions in the Wasserstein space of probability measures. Based on earlier geometric insights into the structure of continuity equations, we define solutions of differential inclusions as absolutely continuous curves whose driving velocity fields are measurable selections of a multifunction taking its values in the space of vector fields. In this general setting, we prove three of the founding results of the theory of differential inclusions: Filippov's theorem, the Relaxation theorem, and the compactness of the solution sets. These contributions, which are based on novel estimates on solutions of continuity equations, are then applied to derive a new existence result for fully non-linear mean-field optimal control problems with closed-loop controls.
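To fix ideas, the notion of solution described above can be sketched as follows (the symbol $V$ for the multifunction is our shorthand and is not taken from the paper): an absolutely continuous curve $t \mapsto \mu_t$ in the Wasserstein space solves the inclusion driven by $V$ whenever there exists a measurable selection $t \mapsto v_t$ with $v_t \in V(t,\mu_t)$ for almost every $t$ such that the continuity equation

$\partial_t \mu_t + \nabla \cdot ( v_t \, \mu_t ) = 0$

holds in the sense of distributions, so that every admissible curve is driven by a velocity field selected from the multifunction.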




Read also

Jun Moon, Tamer Basar (2019)
We consider two-player zero-sum differential games (ZSDGs), where the state process (dynamical system) depends on the random initial condition and the state process's distribution, and the objective functional includes the state process's distribution and the random target variable. Unlike ZSDGs studied in the existing literature, the ZSDG of this paper introduces a new technical challenge, since the corresponding (lower and upper) value functions are defined on $\mathcal{P}_2$ (the set of probability measures with finite second moments) or $\mathcal{L}_2$ (the set of random variables with finite second moments), both of which are infinite-dimensional spaces. We show that the (lower and upper) value functions on $\mathcal{P}_2$ and $\mathcal{L}_2$ are equivalent (law invariant) and continuous, and satisfy dynamic programming principles. We use the notion of derivative of a function of probability measures in $\mathcal{P}_2$ and its lifted version in $\mathcal{L}_2$ to show that the (lower and upper) value functions are unique viscosity solutions to the associated (lower and upper) Hamilton-Jacobi-Isaacs equations, which are (infinite-dimensional) first-order PDEs on $\mathcal{P}_2$ and $\mathcal{L}_2$, where uniqueness is obtained via the comparison principle. Under the Isaacs condition, we show that the ZSDG has a value.
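The lifting alluded to above can be summarised as follows (a standard construction in this literature; the notation is ours): a function $v$ defined on $\mathcal{P}_2$ is lifted to $\tilde{v}$ on $\mathcal{L}_2$ by setting

$\tilde{v}(X) := v(\mathbb{P}_X)$ for every $X \in \mathcal{L}_2$,

which makes $\tilde{v}$ law invariant by construction; the derivative of $v$ at a measure $\mu$ is then identified with the Fréchet derivative of $\tilde{v}$ at any random variable $X$ whose law is $\mu$.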
In this article, we derive first-order necessary optimality conditions for a constrained optimal control problem formulated in the Wasserstein space of probability measures. To this end, we introduce a new notion of localised metric subdifferential for compactly supported probability measures, and investigate the intrinsic linearised Cauchy problems associated with non-local continuity equations. In particular, we show that when the velocity perturbations belong to the tangent cone to the convexification of the set of admissible velocities, the solutions of these linearised problems are tangent to the solution set of the corresponding continuity inclusion. We then make use of these novel concepts to provide a synthetic and geometric proof of the celebrated Pontryagin Maximum Principle for an optimal control problem with inequality final-point constraints. In addition, we propose sufficient conditions ensuring the normality of the maximum principle.
In this article, we propose a new unifying framework for the investigation of multi-agent control problems in the mean-field setting. Our approach is based on a new definition of differential inclusions for continuity equations formulated in the Wasserstein spaces of optimal transport. The latter allows us to extend several known results of the classical theory of differential inclusions, and to prove an exact correspondence between solutions of differential inclusions and control systems. We demonstrate its relevance on an example of a leader-follower evacuation problem.
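The correspondence invoked here can be illustrated schematically (the notation below is ours and purely indicative): a closed-loop control system on measures, $\partial_t \mu_t + \nabla \cdot \big( f(t,x,\mu_t,u(t,x)) \, \mu_t \big) = 0$ with admissible controls $u \in \mathcal{U}$, is matched with the continuity inclusion whose right-hand side collects all admissible velocity fields,

$V(t,\mu) := \{ \, f(t,\cdot,\mu,u(t,\cdot)) \, : \, u \in \mathcal{U} \, \}$,

so that trajectories of the control system and solutions of the inclusion coincide.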
We consider monotone inclusions defined on a Hilbert space where the operator is given by the sum of a maximal monotone operator $T$ and a single-valued monotone, Lipschitz continuous, and expectation-valued operator $V$. We draw motivation from the seminal work by Attouch and Cabot on relaxed inertial methods for monotone inclusions and present a stochastic extension of the relaxed inertial forward-backward-forward (RISFBF) method. Facilitated by an online variance reduction strategy via a mini-batch approach, we show that RISFBF produces a sequence that weakly converges to the solution set. Moreover, it is possible to estimate the rate at which the discrete velocity of the stochastic process vanishes. Under strong monotonicity, we demonstrate strong convergence, and give a detailed assessment of the iteration and oracle complexity of the scheme. When the mini-batch size is raised at a geometric (polynomial) rate, the rate statement can be strengthened to a linear (suitable polynomial) rate, while the oracle complexity of computing an $\epsilon$-solution improves to $O(1/\epsilon)$. Importantly, the latter claim allows for possibly biased oracles, a key theoretical advancement allowing for far broader applicability. By defining a restricted gap function based on the Fitzpatrick function, we prove that the expected gap of an averaged sequence diminishes at a sublinear rate of $O(1/k)$, while the oracle complexity of computing a suitably defined $\epsilon$-solution is $O(1/\epsilon^{1+a})$ where $a>1$. Numerical results on two-stage games and an overlapping group Lasso problem illustrate the advantages of our method compared to stochastic forward-backward-forward (SFBF) and SA schemes.
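As a rough illustration of the iteration structure described above, here is a minimal Python sketch of a relaxed inertial stochastic forward-backward-forward step. It is not the authors' exact scheme: the placement of the inertial, relaxation and mini-batch steps reflects our reading of the abstract, and the toy operators and all parameter values are illustrative assumptions.

import numpy as np

def risfbf(x0, resolvent, V_sample, n_iter=300, lam=0.05, alpha=0.3, rho=0.8,
           batch0=4, batch_growth=1.05):
    # Sketch of a relaxed inertial stochastic forward-backward-forward iteration
    # for 0 in T(x) + V(x); resolvent(v, lam) evaluates J_{lam*T}(v), and
    # V_sample(x, m) is a mini-batch estimator of the expectation-valued operator V.
    x_prev, x, batch = x0.copy(), x0.copy(), float(batch0)
    for _ in range(n_iter):
        m = int(np.ceil(batch))
        z = x + alpha * (x - x_prev)                 # inertial extrapolation
        Vz = V_sample(z, m)                          # stochastic forward step at z
        y = resolvent(z - lam * Vz, lam)             # backward (resolvent) step
        Vy = V_sample(y, m)                          # second stochastic forward step at y
        fbf = y + lam * (Vz - Vy)                    # Tseng-type correction
        x_prev, x = x, (1.0 - rho) * z + rho * fbf   # relaxation between z and the FBF update
        batch *= batch_growth                        # geometrically increasing mini-batch size
    return x

# Toy instance: T is the normal cone of the box [-1, 1]^d (resolvent = projection)
# and V(x) = E[A x + b + noise] with A symmetric positive definite, hence monotone.
rng = np.random.default_rng(0)
d = 5
M = rng.standard_normal((d, d))
A = M @ M.T + np.eye(d)
b = rng.standard_normal(d)

resolvent = lambda v, lam: np.clip(v, -1.0, 1.0)
V_sample = lambda x, m: A @ x + b + rng.standard_normal((m, d)).mean(axis=0)

lam = 0.9 / np.linalg.norm(A, 2)   # step size kept below 1 / Lip(V) for this toy operator
print(risfbf(np.zeros(d), resolvent, V_sample, lam=lam))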
In this paper, the problem of safe global maximization (not to be confused with robust optimization) of expensive noisy black-box functions satisfying the Lipschitz condition is considered. The notion of safety means that the objective function $f(x)$ should not violate a safety threshold during optimization, for instance, a certain a priori given value $h$ in a maximization problem. Thus, any new function evaluation (possibly corrupted by noise) must be performed at safe points only, namely, at points $y$ for which it is known that the objective function $f(y) > h$. The main difficulty here consists in the fact that the optimization algorithm should ensure that the safety constraint will be satisfied at a point $y$ before the evaluation of $f(y)$ is executed. Thus, it is required both to determine the safe region $\Omega$ within the search domain $D$ and to find the global maximum within $\Omega$. An additional difficulty consists in the fact that these problems should be solved in the presence of noise. This paper starts with a theoretical study of the problem, and it is shown that even though the objective function $f(x)$ satisfies the Lipschitz condition, traditional Lipschitz minorants and majorants cannot be used due to the presence of noise. Then, a $\delta$-Lipschitz framework and two algorithms using it are proposed to solve the safe global maximization problem. The first method determines the safe area within the search domain, and the second one executes the global maximization over the found safe region. For both methods, a number of theoretical results related to their functioning and convergence are established. Finally, numerical experiments confirming the reliability of the proposed procedures are performed.
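To make the idea of a noise-robust Lipschitz safety certificate concrete, here is a hedged one-dimensional Python sketch: a candidate point is declared safe only if an already evaluated point certifies, through a Lipschitz bound inflated by a margin $\delta$, that the objective exceeds the threshold $h$. The precise $\delta$-Lipschitz bounds and the selection rule used in the paper may differ; the test function, the constants $L$, $\delta$, $h$, and the acquisition rule below are illustrative assumptions.

import numpy as np

def is_safe(y, X, F, L, delta, h):
    # y is declared safe if some evaluated point x_i certifies f(y) > h through
    # the pessimistic bound F_i - (L + delta) * |y - x_i|.
    return np.any(F - (L + delta) * np.abs(X - y) > h)

def optimistic_value(y, X, F, L, delta):
    # Lipschitz-style majorant, used here only as a simple acquisition rule.
    return np.min(F + (L + delta) * np.abs(X - y))

rng = np.random.default_rng(1)
f = lambda x: np.sin(3.0 * x) + 0.5 * x                  # hidden objective on [0, 4]
noisy_f = lambda x: f(x) + 0.05 * rng.standard_normal()  # noisy oracle
L, delta, h = 3.5, 0.3, 0.0                              # Lipschitz bound, noise margin, threshold

X = np.array([0.5])                                      # starting point assumed safe
F = np.array([noisy_f(0.5)])
for _ in range(40):
    candidates = rng.uniform(0.0, 4.0, size=400)
    mask = np.array([is_safe(y, X, F, L, delta, h) for y in candidates])
    if not mask.any():
        break
    y_next = max(candidates[mask], key=lambda y: optimistic_value(y, X, F, L, delta))
    X = np.append(X, y_next)
    F = np.append(F, noisy_f(y_next))

print("best safe observation:", X[np.argmax(F)], float(F.max()))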