Singular Limit of BSDEs and Optimal Control of two Scale Stochastic Systems in Infinite Dimensional Spaces

Posted by Giuseppina Guatteri
Publication date: 2017
Paper language: English




In this paper we study, by probabilistic techniques, the convergence of the value function for a two-scale, infinite-dimensional, stochastic controlled system as the ratio between the two evolution speeds diverges. The value function is represented as the solution of a backward stochastic differential equation (BSDE), which is shown to converge towards a reduced BSDE. The noise is assumed to be additive in both the slow and the fast state equations. A non-degeneracy condition on the slow equation is required. The limit BSDE involves the solution of an ergodic BSDE and is itself interpreted as the value function of an auxiliary stochastic control problem on a reduced state space.
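For orientation, a slow-fast system of the type described above can be sketched in generic, illustrative notation (the symbols and the precise form below are assumptions for exposition, not taken from the paper):

```latex
% Generic two-scale controlled system with additive noise in both
% equations; \epsilon > 0 is the ratio of the evolution speeds, u the
% control, W and V cylindrical Wiener processes (illustrative notation).
\begin{aligned}
  dX^{\epsilon}_t &= \bigl[A X^{\epsilon}_t
      + b(X^{\epsilon}_t, Q^{\epsilon}_t, u_t)\bigr]\,dt + dW_t,
      & \text{(slow)}\\
  dQ^{\epsilon}_t &= \frac{1}{\epsilon}\bigl[B Q^{\epsilon}_t
      + F(X^{\epsilon}_t, Q^{\epsilon}_t)\bigr]\,dt
      + \frac{1}{\sqrt{\epsilon}}\,dV_t,
      & \text{(fast)}
\end{aligned}
```

In this kind of setting the value function is represented through the $Y$-component of an associated BSDE, and the singular limit corresponds to letting $\epsilon \to 0$, so that the fast variable $Q^{\epsilon}$ equilibrates and its averaged effect is captured, in the limit equation, by the solution of an ergodic BSDE.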

Read also

In this paper we study the limit of the value function for a two-scale, infinite-dimensional, stochastic controlled system with cylindrical noise and possibly degenerate diffusion. The limit is represented as the value function of a new reduced control problem (on a reduced state space). The presence of a cylindrical noise prevents representation of the limit by viscosity solutions of HJB equations, while degeneracy of diffusion coefficients prevents representation as a classical BSDE. We use a vanishing noise regularization technique.
We study a class of infinite-dimensional singular stochastic control problems with applications in economic theory and finance. The control process linearly affects an abstract evolution equation on a suitable partially-ordered infinite-dimensional space X, it takes values in the positive cone of X, and it has right-continuous and nondecreasing paths. We first provide a rigorous formulation of the problem by properly defining the controlled dynamics and integrals with respect to the control process. We then exploit the concave structure of our problem and derive necessary and sufficient first-order conditions for optimality. The latter are finally exploited in a specification of the model where we find an explicit expression of the optimal control. The techniques used are those of semigroup theory, vector-valued integration, convex analysis, and general theory of stochastic processes.
We derive the explicit solution to a singular stochastic control problem of the monotone follower type with an expected ergodic criterion as well as to its counterpart with a pathwise ergodic criterion. These problems have been motivated by the optimal sustainable exploitation of an ecosystem, such as a natural fishery. Under general assumptions on the diffusion coefficients and the running payoff function, we show that both performance criteria give rise to the same optimal long-term average rate as well as to the same optimal strategy, which is of a threshold type. We solve the two problems by first constructing a suitable solution to their associated Hamilton-Jacobi-Bellman (HJB) equation, which takes the form of a quasi-variational inequality with a gradient constraint.
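A quasi-variational inequality with a gradient constraint, as mentioned in the abstract above, commonly takes the following one-dimensional form (a generic sketch with illustrative symbols; signs and the exact constraint depend on the model's cost/reward convention and are not taken from the paper):

```latex
% Generic ergodic HJB quasi-variational inequality for a
% monotone-follower problem: \lambda is the long-run average rate,
% \pi the running payoff, p the marginal reward per unit of control,
% and w the potential function (illustrative notation).
\max\Bigl\{\tfrac{1}{2}\sigma^{2}(x)\,w''(x) + b(x)\,w'(x)
    + \pi(x) - \lambda,\; p - w'(x)\Bigr\} = 0.
```

In such formulations the state space splits into a waiting region, where the differential operator term vanishes, and an action region, where the gradient constraint $w'(x) = p$ is active; the boundary between the two is the threshold that characterizes the optimal strategy.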
We study the optimal control of path-dependent McKean-Vlasov equations valued in Hilbert spaces motivated by non-Markovian mean-field models driven by stochastic PDEs. We first establish the well-posedness of the state equation, and then we prove the dynamic programming principle (DPP) in such a general framework. The crucial law invariance property of the value function V is rigorously obtained, which means that V can be viewed as a function on the Wasserstein space of probability measures on the set of continuous functions valued in Hilbert space. We then define a notion of pathwise measure derivative, which extends the Wasserstein derivative due to Lions [41], and prove a related functional Itô formula in the spirit of Dupire [24] and Wu and Zhang [51]. The Master Bellman equation is derived from the DPP by means of a suitable notion of viscosity solution. We provide different formulations and simplifications of such a Bellman equation notably in the special case when there is no dependence on the law of the control.
This paper studies a class of non-Markovian singular stochastic control problems, for which we provide a novel probabilistic representation. The solution of such a control problem is proved to identify with the solution of a $Z$-constrained BSDE, with dynamics associated to a non-singular underlying forward process. Due to the non-Markovian environment, our main argumentation relies on the use of comparison arguments for path-dependent PDEs. Our representation allows us in particular to quantify the regularity of the solution to the singular stochastic control problem in terms of the space and time initial data. Our framework also extends to the consideration of degenerate diffusions, leading to the representation of the solution as the infimum of solutions to $Z$-constrained BSDEs. As an application, we study the utility maximisation problem with transaction costs for non-Markovian dynamics.
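The notion of a $Z$-constrained BSDE used above can be sketched generically as follows (illustrative notation under standard assumptions, not the paper's precise formulation): one seeks the minimal supersolution $(Y, Z, K)$ of a BSDE whose $Z$-component is forced to stay in a closed convex set $C$:

```latex
% Minimal supersolution of a constrained BSDE: \xi is the terminal
% condition, f the driver, K a nondecreasing process compensating the
% constraint Z_t \in C (generic sketch, illustrative notation).
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds - \int_t^T Z_s\,dW_s
      + K_T - K_t,
\qquad Z_t \in C \quad dt \otimes d\mathbb{P}\text{-a.e.}
```

The nondecreasing process $K$ plays a role analogous to the singular part of the control: it acts only when needed to keep $Z$ inside the constraint set, and the value of the singular control problem is then identified with the $Y$-component of this minimal supersolution.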