
On a class of path-dependent singular stochastic control problems

Posted by: Dylan Possamaï
Publication date: 2017
Research field:
Language: English

This paper studies a class of non-Markovian singular stochastic control problems, for which we provide a novel probabilistic representation. The solution of such a control problem is proved to identify with the solution of a $Z$-constrained BSDE, whose dynamics are associated with a non-singular underlying forward process. Due to the non-Markovian environment, our main arguments rely on comparison results for path-dependent PDEs. Our representation allows us in particular to quantify the regularity of the solution to the singular stochastic control problem in terms of its initial time and space data. Our framework also extends to degenerate diffusions, leading to a representation of the solution as the infimum of solutions to $Z$-constrained BSDEs. As an application, we study the utility maximisation problem with transaction costs for non-Markovian dynamics.
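
A rough sketch of the object involved (generic notation of ours, not the exact formulation of the paper): a $Z$-constrained BSDE prescribes a triple $(Y,Z,K)$ satisfying
$$Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,\mathrm{d}s - \int_t^T Z_s\,\mathrm{d}W_s + K_T - K_t, \qquad Z_t \in \mathcal{Z} \ \text{for a.e. } (t,\omega),$$
where $K$ is a nondecreasing process, minimal in an appropriate sense, which enforces the constraint on $Z$; the value of the singular control problem is then identified with $Y$ at the initial time.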


Read also

We study a class of infinite-dimensional singular stochastic control problems with applications in economic theory and finance. The control process linearly affects an abstract evolution equation on a suitable partially ordered infinite-dimensional space X; it takes values in the positive cone of X and has right-continuous, nondecreasing paths. We first provide a rigorous formulation of the problem by properly defining the controlled dynamics and the integrals with respect to the control process. We then exploit the concave structure of our problem and derive necessary and sufficient first-order conditions for optimality. The latter are finally exploited in a specification of the model in which we find an explicit expression for the optimal control. The techniques used are those of semigroup theory, vector-valued integration, convex analysis, and the general theory of stochastic processes.
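
Schematically, and only as an illustration under assumptions of our own rather than the precise setting of that paper, the controlled state can be pictured as
$$\mathrm{d}X^\nu_t = \big(\mathcal{A}X^\nu_t + b(X^\nu_t)\big)\,\mathrm{d}t + \sigma(X^\nu_t)\,\mathrm{d}W_t + \mathrm{d}\nu_t,$$
where $\mathcal{A}$ generates a $C_0$-semigroup on the partially ordered space X and the control $\nu$ is right-continuous, nondecreasing, and takes values in the positive cone of X.
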
We establish a generalization of the Noether theorem for stochastic optimal control problems. Exploiting the tools of jet bundles and contact geometry, we prove that from any (contact) symmetry of the Hamilton-Jacobi-Bellman equation associated with an optimal control problem it is possible to build a related local martingale. Moreover, we provide an application of the theoretical results to Merton's optimal portfolio problem, showing that this model admits infinitely many conserved quantities in the form of local martingales.
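
As a generic reminder (notation assumed here, not quoted from that work), the value function $v$ of a stochastic optimal control problem solves a Hamilton-Jacobi-Bellman equation of the form
$$\partial_t v(t,x) + \sup_{u}\big\{\mathcal{L}^u v(t,x) + f(t,x,u)\big\} = 0, \qquad v(T,\cdot) = g,$$
and the Noether-type statement is that each (contact) symmetry of this equation gives rise to a quantity whose evaluation along the optimally controlled trajectory is a local martingale.
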
We study the problem of optimally managing an inventory with unknown demand trend. Our formulation leads to a stochastic control problem under partial observation, in which a Brownian motion with non-observable drift can be singularly controlled in both an upward and downward direction. We first derive the equivalent separated problem under full information with state-space components given by the Brownian motion and the filtering estimate of its unknown drift, and we then completely solve the latter. Our approach uses the transition amongst three different but equivalent problem formulations, links between two-dimensional bounded-variation stochastic control problems and games of optimal stopping, and probabilistic methods in combination with refined viscosity theory arguments. We show substantial regularity of (a transformed version of) the value function, we construct an optimal control rule, and we show that the free boundaries delineating (transformed) action and inaction regions are bounded globally Lipschitz continuous functions. To our knowledge this is the first time that such a problem has been solved in the literature.
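
A minimal sketch of the separated dynamics, under simplifying assumptions of ours (Gaussian prior on the drift, unit volatility): if the observation follows $\mathrm{d}X_t = \mu\,\mathrm{d}t + \mathrm{d}W_t$ with $\mu$ unobservable, the filtering estimate $\hat\mu_t = \mathbb{E}[\mu\,|\,\mathcal{F}^X_t]$ satisfies
$$\mathrm{d}\hat\mu_t = \gamma_t\big(\mathrm{d}X_t - \hat\mu_t\,\mathrm{d}t\big), \qquad \dot\gamma_t = -\gamma_t^{2},$$
where $\gamma_t$ is the conditional variance; the separated problem then controls the pair $(X,\hat\mu)$ singularly in both directions.
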
In this paper we study a Markovian two-dimensional bounded-variation stochastic control problem whose state process consists of a diffusive mean-reverting component and a purely controlled one. The main characteristic of the problem lies in the interaction of the two components of the state process: the mean-reversion level of the diffusive component is an affine function of the current value of the purely controlled one. By relying on a combination of techniques from viscosity theory and free-boundary analysis, we provide the structure of the value function and show that it satisfies a second-order smooth-fit principle. This regularity is then exploited to determine a system of functional equations solved by the two monotone continuous curves (free boundaries) that split the control problem's state space into three connected regions. Further properties of the free boundaries are also obtained.
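
In schematic form (generic notation of ours), the state process can be pictured as
$$\mathrm{d}X_t = \kappa\big(\lambda(Y_t) - X_t\big)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t, \qquad \mathrm{d}Y_t = \mathrm{d}\xi^{+}_t - \mathrm{d}\xi^{-}_t,$$
where $\xi^{+}$ and $\xi^{-}$ are the nondecreasing parts of the bounded-variation control and $\lambda$ is affine, so that the mean-reversion level of the diffusive component moves with the purely controlled one.
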
In this paper we study a class of combined regular and singular stochastic control problems that can be expressed as constrained BSDEs. In the Markovian case, this reduces to a characterization through a PDE with gradient constraint. But the BSDE formulation makes it possible to move beyond Markovian models and consider path-dependent problems. We also provide an approximation of the original control problem with standard BSDEs that yield a characterization of approximately optimal values and controls.
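
For the Markovian reduction mentioned above, a purely illustrative form of the PDE with gradient constraint is
$$\min\Big\{-\partial_t v - \mathcal{L}v - f\big(\cdot,v,\partial_x v\big),\; c - |\partial_x v|\Big\} = 0,$$
with the singular part of the control acting only where the gradient bound binds; in the constrained-BSDE formulation this bound becomes a constraint on the $Z$ component, which is what carries over to the path-dependent setting.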