We survey representative results on fuzzy fractional differential equations, controllability, approximate controllability, optimal control, and optimal feedback control for several kinds of fractional evolution equations. We also consider optimality and relaxation of multiple control problems described by nonlinear fractional differential equations with nonlocal control conditions in Banach spaces.
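As a rough sketch of the setting (notation ours, not taken from the surveyed works), a typical fractional evolution control problem with a nonlocal condition in a Banach space $X$ reads
\[
{}^{C}D^{\alpha}_{t}\,x(t) = A\,x(t) + f\big(t, x(t)\big) + B\,u(t), \quad t \in (0,T], \qquad x(0) = x_0 + g(x),
\]
where ${}^{C}D^{\alpha}_{t}$ is a Caputo fractional derivative of order $\alpha \in (0,1)$, $A$ generates a $C_0$-semigroup on $X$, $u$ is the control, and the map $g$ encodes the nonlocal condition; controllability, optimal control, and relaxation are then studied for this class of problems.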
We study the problem of optimal insider control of an SPDE (a stochastic evolution equation) driven by a Brownian motion and a Poisson random measure. Our optimal control problem is new in two ways: (i) the controller has access to inside information, i.e., to information about a future state of the system; (ii) the integro-differential operator of the SPDE may depend on the control. In the first part of the paper, we formulate a sufficient and a necessary maximum principle for this type of control problem in two cases: (1) when the control is allowed to depend both on time t and on the space variable x; (2) when the control is not allowed to depend on x. In the second part of the paper, we apply these results to the problem of optimal control of an SDE system in which the insider controller has only noisy observations of the state of the system. Using results from nonlinear filtering, we transform this noisy-observation SDE insider control problem into a full-observation SPDE insider control problem. The results are illustrated by explicit examples.
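For orientation (our notation, a simplified version of the general setting), the controlled SPDE can be thought of in the form
\[
dY(t,x) = \big[A_{u}Y(t,x) + b\big(t,x,Y(t,x),u\big)\big]\,dt + \sigma\big(t,x,Y(t,x),u\big)\,dB(t) + \int_{\mathbb{R}_0}\gamma\big(t,x,Y(t,x),u,\zeta\big)\,\tilde N(dt,d\zeta),
\]
where $A_u$ is the (possibly control-dependent) integro-differential operator, $B$ is a Brownian motion, $\tilde N$ is a compensated Poisson random measure, and the insider's admissible controls are required to be adapted to an enlarged filtration $\mathbb{H}_t \supseteq \mathcal{F}_t$ that carries the future information.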
In this article, we propose a new unifying framework for the investigation of multi-agent control problems in the mean-field setting. Our approach is based on a new definition of differential inclusions for continuity equations formulated in the Wasserstein spaces of optimal transport. This definition allows us to extend several known results of the classical theory of differential inclusions and to prove an exact correspondence between solutions of differential inclusions and control systems. We demonstrate the approach on an example of a leader-follower evacuation problem.
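As a heuristic illustration (notation ours), such a differential inclusion in the Wasserstein space $\mathcal{P}_2(\mathbb{R}^d)$ takes the form of a continuity equation whose velocity field is only constrained to lie in a set-valued map,
\[
\partial_t \mu_t + \nabla \cdot \big(v_t\,\mu_t\big) = 0, \qquad v_t \in F(t,\mu_t) \quad \text{for a.e. } t \in [0,T],
\]
and the correspondence mentioned above matches solutions of this inclusion with trajectories of the controlled continuity equation $\partial_t \mu_t + \nabla\cdot\big(v(t,\cdot,\mu_t,u_t)\,\mu_t\big) = 0$ ranging over admissible controls $u_t$.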
We present a probabilistic formulation of risk-aware optimal control problems for stochastic differential equations. In our framework, risk awareness is captured by objective functions in which the risk-neutral expectation is replaced by a risk function, a nonlinear functional of random variables that accounts for the controller's risk preferences. We state and prove a risk-aware minimum principle that is a parsimonious generalization of the well-known risk-neutral stochastic Pontryagin's minimum principle. As our main results, we give necessary and also sufficient conditions for optimality of control processes taking values in probability measures defined on a given action space. We show that, remarkably, in going from the risk-neutral to the risk-aware case, the minimum principle is modified simply by the introduction of one additional real-valued stochastic process that acts as a risk adjustment factor for the given cost rate and terminal cost functions. This adjustment process is given explicitly as the expectation, conditional on the filtration at the given time, of an appropriately defined functional derivative of the risk function evaluated at the random total cost. Our results rely on the Fréchet differentiability of the risk function, and for completeness we prove, under mild assumptions, the existence of Fréchet derivatives of some common risk functions. We give a simple application of the results to a portfolio allocation problem and show that the risk awareness of the objective function gives rise to a risk premium term characterized by the risk adjustment process described above. This suggests uses of our results in, for example, the pricing of risk modeled by generic risk functions in financial applications.
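Schematically (our notation, not the paper's exact formulation), the risk-neutral objective $\mathbb{E}\big[\int_0^T c(t,X_t,u_t)\,dt + g(X_T)\big]$ is replaced by
\[
J_{\rho}(u) = \rho\!\left(\int_0^T c(t,X_t,u_t)\,dt + g(X_T)\right),
\]
with $\rho$ a Fréchet-differentiable risk function. The risk adjustment process is then of the form $R_t = \mathbb{E}\big[\rho'(C^u)\,\big|\,\mathcal{F}_t\big]$, where $C^u$ denotes the random total cost and $\rho'(C^u)$ the functional derivative of $\rho$ evaluated at $C^u$; taking $\rho$ to be the plain expectation gives $R_t \equiv 1$ and recovers the risk-neutral minimum principle.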
We show that any two trajectories of solutions of a one-dimensional fractional differential equation (FDE) either coincide or do not intersect each other. In contrast, in the higher-dimensional case, two different trajectories can meet. Furthermore, one-dimensional FDEs and triangular systems of FDEs generate nonlocal fractional dynamical systems, whereas a higher-dimensional FDE does not, in general, generate a nonlocal dynamical system.
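Concretely (assuming, as is standard in this context, a Caputo derivative of order $\alpha \in (0,1)$), the scalar equation in question is
\[
{}^{C}D^{\alpha}_{0+}\,x(t) = f\big(t, x(t)\big), \qquad x(0) = x_0,
\]
and the statement says that solutions started from distinct initial values never meet at any later time, whereas for systems in dimension $d \ge 2$ two distinct trajectories may intersect.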
We deal with the problem of parameter estimation for stochastic differential equations (SDEs) in a partially observed framework. We aim to design a method that works for both elliptic and hypoelliptic SDEs, the latter being characterized by degenerate diffusion coefficients. This feature often causes the failure of contrast estimators based on the Euler-Maruyama discretization scheme and dramatically impairs classic stochastic filtering methods used to reconstruct the unobserved states. All of these issues make the estimation problem difficult to solve for hypoelliptic SDEs. To overcome this, we construct a cost function that is well defined regardless of whether the SDE is elliptic or hypoelliptic. We also bypass the filtering step by adopting a control-theoretic perspective: the unobserved states are estimated by solving deterministic optimal control problems with numerical methods that do not require strong assumptions on the conditioning of the diffusion coefficient. Numerical simulations on different partially observed hypoelliptic SDEs show that our method produces accurate estimates while dramatically reducing the computational cost compared to other methods.
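As a rough illustration of the general idea only (this is not the authors' algorithm; the toy model, the penalty weight, and the optimizer are assumptions made here), the following sketch estimates the unobserved component of a simple hypoelliptic SDE by minimizing a deterministic cost over control-like residuals, instead of running a stochastic filter.

```python
# Illustrative sketch only: a generic penalized least-squares ("deterministic
# optimal control") state estimate for a toy hypoelliptic SDE.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy hypoelliptic model (assumed for illustration):
#   dX1 = X2 dt,   dX2 = (-th1*X1 - th2*X2) dt + sig dW.
# Noise enters only the unobserved component X2, so the diffusion is degenerate.
th1, th2, sig = 2.0, 0.5, 0.4
T, N = 5.0, 100
dt = T / N

def drift(x):
    return np.array([x[1], -th1 * x[0] - th2 * x[1]])

# Simulate a reference path with Euler-Maruyama and observe only X1 (plus noise).
X = np.zeros((N + 1, 2))
X[0] = [1.0, 0.0]
for k in range(N):
    X[k + 1] = X[k] + drift(X[k]) * dt + np.array([0.0, sig]) * np.sqrt(dt) * rng.standard_normal()
obs = X[:, 0] + 0.01 * rng.standard_normal(N + 1)

# Control-theoretic viewpoint: the residuals w_k pushing X2 play the role of
# controls; we minimize data misfit plus a quadratic control energy.
lam = 0.05  # penalty weight on the control (a tuning parameter, assumed here)

def rollout(z):
    # z = (initial state guess, control sequence of length N)
    path = np.zeros((N + 1, 2))
    path[0] = z[:2]
    for k in range(N):
        path[k + 1] = path[k] + drift(path[k]) * dt + np.array([0.0, z[2 + k]])
    return path

def cost(z):
    path = rollout(z)
    return np.sum((path[:, 0] - obs) ** 2) + lam * np.sum(z[2:] ** 2) / dt

z0 = np.concatenate(([obs[0], 0.0], np.zeros(N)))  # initial guess: zero controls
res = minimize(cost, z0, method="L-BFGS-B")
est = rollout(res.x)
print("RMSE on the unobserved component X2:", np.sqrt(np.mean((est[:, 1] - X[:, 1]) ** 2)))
```

The point of the sketch is only that the smoothing step is posed as a deterministic optimization problem in which the degenerate diffusion enters through the structure of the dynamics rather than through an inverted noise covariance.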