
Ergodic singular stochastic control motivated by the optimal sustainable exploitation of an ecosystem

Added by Gechun Liang
Publication date: 2020
Language: English





We derive the explicit solution to a singular stochastic control problem of the monotone follower type with an expected ergodic criterion as well as to its counterpart with a pathwise ergodic criterion. These problems have been motivated by the optimal sustainable exploitation of an ecosystem, such as a natural fishery. Under general assumptions on the diffusion coefficients and the running payoff function, we show that both performance criteria give rise to the same optimal long-term average rate as well as to the same optimal strategy, which is of a threshold type. We solve the two problems by first constructing a suitable solution to their associated Hamilton-Jacobi-Bellman (HJB) equation, which takes the form of a quasi-variational inequality with a gradient constraint.
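
For orientation, here is a schematic formulation of the kind of problem described above; the symbols $b$, $\sigma$, $h$, $k$, $\lambda$ and $x^*$ are generic placeholders and need not match the paper's notation or assumptions. The state is a one-dimensional diffusion decreased by a nondecreasing control $\zeta$ (the monotone follower structure),

$dX_t = b(X_t)\,dt + \sigma(X_t)\,dW_t - d\zeta_t,$

and the expected and pathwise ergodic criteria take the respective forms

$\liminf_{T\to\infty} \frac{1}{T}\,\mathbb{E}\Big[\int_0^T h(X_t)\,dt + \int_{[0,T]} k\,d\zeta_t\Big]$ and $\liminf_{T\to\infty} \frac{1}{T}\Big(\int_0^T h(X_t)\,dt + \int_{[0,T]} k\,d\zeta_t\Big)$ a.s.

The associated HJB equation is a quasi-variational inequality with a gradient constraint, linking the optimal long-term average rate $\lambda$ to a potential function $w$:

$\max\Big\{\tfrac{1}{2}\sigma^2(x)\,w''(x) + b(x)\,w'(x) + h(x) - \lambda,\; k - w'(x)\Big\} = 0.$

A threshold strategy then keeps the state at or below a level $x^*$, exerting just enough control (reflection) whenever the state attempts to exceed it.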



Related research

In this paper we study, by probabilistic techniques, the convergence of the value function for a two-scale, infinite-dimensional, stochastic controlled system as the ratio between the two evolution speeds diverges. The value function is represented as the solution of a backward stochastic differential equation (BSDE), which is shown to converge towards a reduced BSDE. The noise is assumed to be additive in both the slow and the fast equations for the state. Some non-degeneracy conditions on the slow equation are required. The limit BSDE involves the solution of an ergodic BSDE and is itself interpreted as the value function of an auxiliary stochastic control problem on a reduced state space.
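
A finite-dimensional caricature of this setting (coefficients, scaling and notation here are purely illustrative) reads

$dX^\varepsilon_s = b(X^\varepsilon_s, Q^\varepsilon_s)\,ds + dW_s$ (slow), $\qquad dQ^\varepsilon_s = \tfrac{1}{\varepsilon}\,F(X^\varepsilon_s, Q^\varepsilon_s)\,ds + \tfrac{1}{\sqrt{\varepsilon}}\,dB_s$ (fast),

with the value function given by the $Y$-component of a BSDE of the form

$Y^\varepsilon_t = g(X^\varepsilon_T) + \int_t^T \psi(X^\varepsilon_s, Q^\varepsilon_s, Z^\varepsilon_s)\,ds - \int_t^T Z^\varepsilon_s\,dW_s - \int_t^T U^\varepsilon_s\,dB_s.$

As $\varepsilon \to 0$, the dependence on the fast variable is averaged out through an ergodic BSDE, and $Y^\varepsilon$ converges to the solution of a reduced BSDE posed on the slow state space alone.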
In this paper we study a Markovian two-dimensional bounded-variation stochastic control problem whose state process consists of a diffusive mean-reverting component and a purely controlled one. The problem's main characteristic lies in the interaction of the two components of the state process: the mean-reversion level of the diffusive component is an affine function of the current value of the purely controlled one. By relying on a combination of techniques from viscosity theory and free-boundary analysis, we provide the structure of the value function and we show that it satisfies a second-order smooth-fit principle. Such regularity is then exploited in order to determine a system of functional equations solved by the two monotone continuous curves (free boundaries) that split the control problem's state space into three connected regions. Further properties of the free boundaries are also obtained.
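
A minimal sketch of such a state process (with illustrative notation): the purely controlled component $Y$ moves only through the bounded-variation control, while the mean-reversion level of the diffusive component $X$ is affine in $Y$,

$dX_t = \kappa\,(a + b\,Y_t - X_t)\,dt + \sigma\,dW_t, \qquad dY_t = d\xi^+_t - d\xi^-_t,$

where $\xi^+$ and $\xi^-$ are nondecreasing processes recording the cumulative upward and downward control.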
This paper studies a class of non-Markovian singular stochastic control problems, for which we provide a novel probabilistic representation. The solution of such a control problem is proved to identify with the solution of a $Z$-constrained BSDE, with dynamics associated with a non-singular underlying forward process. Due to the non-Markovian environment, our main argument relies on the use of comparison results for path-dependent PDEs. Our representation allows us, in particular, to quantify the regularity of the solution to the singular stochastic control problem in terms of the space and time initial data. Our framework also extends to degenerate diffusions, leading to the representation of the solution as the infimum of solutions to $Z$-constrained BSDEs. As an application, we study the utility maximisation problem with transaction costs for non-Markovian dynamics.
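
Schematically, and with generic notation rather than the paper's, the representation is through a BSDE with a constrained martingale component,

$Y_t = \xi + \int_t^T f(s, X_s, Y_s, Z_s)\,ds + (K_T - K_t) - \int_t^T Z_s\,dW_s, \qquad Z_t \in \mathcal{K} \ \ dt\otimes d\mathbb{P}\text{-a.e.},$

where $K$ is a nondecreasing process enforcing the constraint and the minimal such solution is selected; in the degenerate case the identification above is with an infimum over solutions of this type.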
We establish a generalization of the Noether theorem for stochastic optimal control problems. Exploiting the tools of jet bundles and contact geometry, we prove that from any (contact) symmetry of the Hamilton-Jacobi-Bellman equation associated with an optimal control problem it is possible to build a related local martingale. Moreover, we provide an application of the theoretical results to Merton's optimal portfolio problem, showing that this model admits infinitely many conserved quantities in the form of local martingales.
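
To fix ideas, the simplest instance of the link between the HJB equation and local martingales is the classical one: if $v$ solves the HJB equation of a control problem with running cost $f$ and $X^*$ is an optimally controlled trajectory with control $u^*$, then

$M_t = v(t, X^*_t) + \int_0^t f(s, X^*_s, u^*_s)\,ds$

is a local martingale. The result described above extracts, from each contact symmetry of the HJB equation, additional conserved quantities of this local-martingale type.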
We study the problem of optimally managing an inventory with unknown demand trend. Our formulation leads to a stochastic control problem under partial observation, in which a Brownian motion with non-observable drift can be singularly controlled in both the upward and the downward direction. We first derive the equivalent separated problem under full information, with state-space components given by the Brownian motion and the filtering estimate of its unknown drift, and we then completely solve the latter. Our approach uses the transition amongst three different but equivalent problem formulations, links between two-dimensional bounded-variation stochastic control problems and games of optimal stopping, and probabilistic methods in combination with refined viscosity theory arguments. We show substantial regularity of (a transformed version of) the value function, we construct an optimal control rule, and we show that the free boundaries delineating (transformed) action and inaction regions are bounded, globally Lipschitz continuous functions. To our knowledge, this is the first time that such a problem has been solved in the literature.
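
A rough sketch of the separated problem's state, with illustrative notation and a Gaussian prior on the drift assumed purely for concreteness: the inventory evolves as

$dX_t = \mu\,dt + \sigma\,dW_t + d\xi^+_t - d\xi^-_t,$

with $\mu$ unobservable, and, since the controls $\xi^\pm$ are known to the decision maker and can be subtracted out, the filtering estimate $M_t = \mathbb{E}[\mu \mid \mathcal{F}_t]$ satisfies the Kalman-Bucy equations

$dM_t = \frac{\gamma_t}{\sigma}\,d\bar{W}_t, \qquad \gamma_t = \Big(\gamma_0^{-1} + t/\sigma^2\Big)^{-1},$

where $\bar{W}$ is the innovation Brownian motion and $\gamma_t$ the deterministic conditional variance. The pair $(X, M)$ then forms the fully observable state of the separated problem.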