
Optimal insider control of stochastic partial differential equations

Published by: Bernt Øksendal
Publication date: 2016
Language: English





We study the problem of optimal insider control of an SPDE (a stochastic evolution equation) driven by a Brownian motion and a Poisson random measure. Our optimal control problem is new in two ways: (i) the controller has access to inside information, i.e., to information about a future state of the system; (ii) the integro-differential operator of the SPDE may depend on the control. In the first part of the paper, we formulate a sufficient and a necessary maximum principle for this type of control problem, in two cases: (1) when the control is allowed to depend both on the time t and on the space variable x; (2) when the control is not allowed to depend on x. In the second part of the paper, we apply the results above to the problem of optimal control of an SDE system when the insider controller has only noisy observations of the state of the system. Using results from nonlinear filtering, we transform this noisy-observation SDE insider control problem into a full-observation SPDE insider control problem. The results are illustrated by explicit examples.
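As a minimal sketch of the type of setup described above (the notation, namely the state X, control u, operator A_u, coefficients b, sigma, gamma, compensated Poisson measure \tilde N and insider filtration H, is illustrative and not taken verbatim from the paper), the controlled SPDE and the insider performance functional can be written as

$$ dX(t,x) = \big[A_u X(t,x) + b\big(t,x,X(t,x),u(t,x)\big)\big]\,dt + \sigma\big(t,x,X(t,x),u(t,x)\big)\,dB(t) + \int_{\mathbb{R}_0} \gamma\big(t,x,X(t,x),u(t,x),\zeta\big)\,\tilde N(dt,d\zeta), $$

$$ J(u) = \mathbb{E}\Big[\int_0^T\!\!\int_D f\big(t,x,X(t,x),u(t,x)\big)\,dx\,dt + \int_D g\big(x,X(T,x)\big)\,dx\Big], $$

where the integro-differential operator A_u may depend on the control and the admissible controls u are required to be adapted to an enlarged (insider) filtration H_t containing F_t. The two cases treated in the paper correspond to controls of the form u = u(t,x) and u = u(t).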




Read also

We present a probabilistic formulation of risk-aware optimal control problems for stochastic differential equations. Risk awareness is in our framework captured by objective functions in which the risk-neutral expectation is replaced by a risk function, a nonlinear functional of random variables that accounts for the controller's risk preferences. We state and prove a risk-aware minimum principle that is a parsimonious generalization of the well-known risk-neutral, stochastic Pontryagin minimum principle. As our main results, we give necessary and also sufficient conditions for optimality of control processes taking values in probability measures defined on a given action space. We show that, remarkably, going from the risk-neutral to the risk-aware case, the minimum principle is simply modified by the introduction of one additional real-valued stochastic process that acts as a risk adjustment factor for given cost rate and terminal cost functions. This adjustment process is explicitly given as the expectation, conditional on the filtration at the given time, of an appropriately defined functional derivative of the risk function evaluated at the random total cost. For our results we rely on the Fréchet differentiability of the risk function, and for completeness, we prove under mild assumptions the existence of Fréchet derivatives of some common risk functions. We give a simple application of the results to a portfolio allocation problem and show that the risk awareness of the objective function gives rise to a risk premium term that is characterized by the risk adjustment process described above. This suggests uses of our results in, e.g., the pricing of risk modeled by generic risk functions in financial applications.
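For orientation, a hedged sketch of the objects described above (the symbols C, c, g, rho are illustrative): with total cost C = \int_0^T c(t,X_t,u_t)\,dt + g(X_T) and a Fréchet-differentiable risk function rho, the risk-aware objective is J(u) = rho(C), and the risk adjustment process mentioned in the abstract takes the form

$$ R_t = \mathbb{E}\big[\, \rho'(C) \,\big|\, \mathcal{F}_t \,\big], $$

so that the cost rate and terminal cost entering the Hamiltonian of the risk-neutral minimum principle are weighted by R_t. Choosing rho(C) = E[C] gives R_t = 1 and recovers the classical risk-neutral Pontryagin principle.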
We deal with the problem of parameter estimation in stochastic differential equations (SDEs) in a partially observed framework. We aim to design a method that works for both elliptic and hypoelliptic SDEs, the latter being characterized by degenerate diffusion coefficients. This feature often causes the failure of contrast estimators based on the Euler-Maruyama discretization scheme and dramatically impairs the classic stochastic filtering methods used to reconstruct the unobserved states. All of these issues make the estimation problem in hypoelliptic SDEs difficult to solve. To overcome this, we construct a cost function that is well defined regardless of the elliptic or hypoelliptic nature of the SDE. We also bypass the filtering step by adopting a control-theoretic perspective: the unobserved states are estimated by solving deterministic optimal control problems using numerical methods that do not require strong assumptions on the conditioning of the diffusion coefficient. Numerical simulations on different partially observed hypoelliptic SDEs show that our method produces accurate estimates while dramatically reducing the computational cost compared to other methods.
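As an illustrative, non-authoritative sketch of such a control-theoretic reconstruction step (the quadratic cost, the weight lambda and the observation map h below are assumptions, not the authors' exact construction): for a model dX_t = b_\theta(X_t)\,dt + \sigma_\theta(X_t)\,dW_t with discrete observations y_i of h(X_{t_i}), the unobserved path can be estimated by solving the deterministic optimal control problem

$$ \min_{u}\ \sum_i \big| y_i - h\big(x(t_i)\big) \big|^2 + \lambda \int_0^T |u(t)|^2\,dt \quad \text{subject to} \quad \dot x(t) = b_\theta\big(x(t)\big) + \sigma_\theta\big(x(t)\big)\,u(t), $$

with the parameter theta then estimated by minimizing the resulting cost over theta, so that neither an Euler-Maruyama contrast nor a stochastic filtering step is needed.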
In this paper, we introduce the concept of Developmental Partial Differential Equation (DPDE), which consists of a Partial Differential Equation (PDE) on a time-varying manifold with complete coupling between the PDE and the manifold's evolution. In other words, the manifold's evolution depends on the solution to the PDE and, vice versa, the differential operator of the PDE depends on the manifold's geometry. A DPDE is used to study a diffusion equation with source on a growing surface whose growth depends on the intensity of the diffused quantity. The surface may, for instance, represent the membrane of an egg chamber, and the diffused quantity a protein activating a signaling pathway leading to growth. Our main objective is to show controllability of the surface shape using a fixed source with variable intensity for the diffusion. More specifically, we look for a control driving a symmetric manifold shape to any other symmetric shape in a given time interval. For the diffusion we take directly the Laplace-Beltrami operator of the surface, while the surface growth is assumed to be equal to the value of the diffused quantity. We introduce a theoretical framework, establish approximate controllability and show numerical results. Future applications include a specific model for the oogenesis of Drosophila melanogaster.
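A minimal sketch of the coupled system described above (notation illustrative): writing u(t, .) for the diffused quantity on the evolving surface M(t), \Delta_{M(t)} for its Laplace-Beltrami operator, x_0 for the fixed source location and s(t) for the control intensity,

$$ \partial_t u = \Delta_{M(t)} u + s(t)\,\delta_{x_0} \quad \text{on } M(t), \qquad V_{\perp}(t,\cdot) = u(t,\cdot), $$

where V_\perp denotes the normal growth velocity of the surface; the control problem is then to steer the (symmetric) shape of M(T) to a prescribed symmetric target shape by choosing s.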
Shanjian Tang, Zhou Yang (2011)
A Dynkin game is considered for stochastic differential equations with random coefficients. We first apply Qiu and Tang's maximum principle for backward stochastic partial differential equations to generalize the Krylov estimate for the distribution of a Markov process to that of a non-Markov process, and establish a generalized Itô-Kunita-Wentzell formula allowing the test function to be a random field of Itô's type taking values in a suitable Sobolev space. We then prove the verification theorem that the Nash equilibrium point and the value of the Dynkin game are characterized by the strong solution of the associated Hamilton-Jacobi-Bellman-Isaacs equation, which here is a backward stochastic partial differential variational inequality (BSPDVI, for short) with two obstacles. We obtain an existence and uniqueness result and a comparison theorem for strong solutions of the BSPDVI. Moreover, we study the monotonicity of the strong solution of the BSPDVI via the comparison theorem and define the free boundaries. Finally, we identify the counterparts for an optimal stopping time problem as a special Dynkin game.
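For orientation, a hedged sketch of a Dynkin game of the type considered (notation illustrative, with obstacles L and U, L less than or equal to U, and terminal payoff xi): the two players choose stopping times tau and sigma, and the value process is

$$ V_t = \operatorname*{ess\,inf}_{\sigma \ge t}\ \operatorname*{ess\,sup}_{\tau \ge t}\ \mathbb{E}\Big[ \int_t^{\tau\wedge\sigma} f(s)\,ds + L_\tau \mathbf{1}_{\{\tau \le \sigma < T\}} + U_\sigma \mathbf{1}_{\{\sigma < \tau\}} + \xi\, \mathbf{1}_{\{\tau \wedge \sigma = T\}} \,\Big|\, \mathcal{F}_t \Big], $$

and the verification theorem identifies this value with the strong solution of the associated BSPDVI with the two obstacles L and U.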
In this article, we propose a new unifying framework for the investigation of multi-agent control problems in the mean-field setting. Our approach is based on a new definition of differential inclusions for continuity equations formulated in the Wasserstein spaces of optimal transport. The latter allows us to extend several known results of the classical theory of differential inclusions, and to prove an exact correspondence between solutions of differential inclusions and control systems. We show the appropriateness of the framework on an example of a leader-follower evacuation problem.
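A hedged sketch of the central object (notation illustrative): a continuity inclusion in the Wasserstein space P_2(R^d) asks for a curve of probability measures t -> mu_t satisfying, in the sense of distributions,

$$ \partial_t \mu_t + \operatorname{div}\big( v_t\, \mu_t \big) = 0, \qquad v_t \in V(t, \mu_t) \ \text{ for a.e. } t \in [0,T], $$

where V is a set-valued map assigning to each time and measure a set of admissible velocity fields; mean-field control systems \partial_t \mu_t + \operatorname{div}( v(t,\mu_t,u_t)\,\mu_t ) = 0 are recovered by taking V(t,\mu) = \{ v(t,\mu,u) : u \in U \}.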