
Singular Limit of Two Scale Stochastic Optimal Control Problems in Infinite Dimensions by Vanishing Noise Regularization

 Added by Giuseppina Guatteri
 Publication date 2021
Language: English





In this paper we study the limit of the value function for a two-scale, infinite-dimensional, stochastic controlled system with cylindrical noise and possibly degenerate diffusion. The limit is represented as the value function of a new, reduced control problem on a reduced state space. The presence of cylindrical noise prevents representing the limit via viscosity solutions of HJB equations, while the degeneracy of the diffusion coefficients prevents a classical BSDE representation. We overcome both obstacles by a vanishing noise regularization technique.
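For orientation, a two-scale controlled system of the kind described above can be sketched schematically as follows; the notation is illustrative and is not taken from the paper:

```latex
% Slow component (state space H, control u, cylindrical Wiener process W):
dX^\varepsilon_t = \bigl[ A X^\varepsilon_t + b(X^\varepsilon_t, Y^\varepsilon_t, u_t) \bigr]\,dt
                   + \sigma(X^\varepsilon_t)\,dW_t, \qquad X^\varepsilon_0 = x,
% Fast component, evolving at speed 1/\varepsilon:
dY^\varepsilon_t = \tfrac{1}{\varepsilon}\bigl[ B Y^\varepsilon_t + F(X^\varepsilon_t, Y^\varepsilon_t) \bigr]\,dt
                   + \tfrac{1}{\sqrt{\varepsilon}}\, G\,dW'_t, \qquad Y^\varepsilon_0 = y,
% Value function of the two-scale problem:
V^\varepsilon(x,y) = \inf_{u} \mathbb{E}\Bigl[ \int_0^T \ell(X^\varepsilon_t, Y^\varepsilon_t, u_t)\,dt
                   + h(X^\varepsilon_T) \Bigr].
```

As the scale separation parameter $\varepsilon \to 0$, the fast variable equilibrates and $V^\varepsilon(x,y)$ converges to the value function of a reduced problem in the slow variable $x$ alone.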





In this paper we study, by probabilistic techniques, the convergence of the value function for a two-scale, infinite-dimensional, stochastic controlled system as the ratio between the two evolution speeds diverges. The value function is represented as the solution of a backward stochastic differential equation (BSDE) that is shown to converge to a reduced BSDE. The noise is assumed to be additive in both the slow and the fast state equations, and a nondegeneracy condition on the slow equation is required. The limit BSDE involves the solution of an ergodic BSDE and is itself interpreted as the value function of an auxiliary stochastic control problem on a reduced state space.
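Schematically, the BSDE representation underlying this approach has the following form; the notation here is illustrative only, not the paper's:

```latex
% BSDE representing the value function of the slow problem:
Y_t = h(X_T) + \int_t^T \psi(X_s, Z_s)\,ds - \int_t^T Z_s\,dW_s,
% where the limit driver is obtained from an ergodic BSDE in the fast variable,
% whose solution (\lambda(x), \bar{Y}, \bar{Z}) yields the averaged Hamiltonian:
\psi(x, z) \;\text{built from}\; \lambda(x) \;\text{solving}\;
\bar{Y}_t = \bar{Y}_T + \int_t^T \bigl[ \bar{\psi}(x, \bar{Z}_s) - \lambda(x) \bigr]\,ds
            - \int_t^T \bar{Z}_s\,dW'_s .
```

The ergodic BSDE captures the long-run behavior of the fast dynamics for each frozen value $x$ of the slow variable, and its constant $\lambda(x)$ enters the driver of the reduced BSDE.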
We study a class of infinite-dimensional singular stochastic control problems with applications in economic theory and finance. The control process linearly affects an abstract evolution equation on a suitable partially-ordered infinite-dimensional space X, it takes values in the positive cone of X, and it has right-continuous and nondecreasing paths. We first provide a rigorous formulation of the problem by properly defining the controlled dynamics and integrals with respect to the control process. We then exploit the concave structure of our problem and derive necessary and sufficient first-order conditions for optimality. The latter are finally exploited in a specification of the model where we find an explicit expression of the optimal control. The techniques used are those of semigroup theory, vector-valued integration, convex analysis, and general theory of stochastic processes.
Scenario-based stochastic optimal control problems suffer from the curse of dimensionality as they can easily grow to six and seven figure sizes. First-order methods are suitable as they can deal with such large-scale problems, but may fail to achieve accurate solutions within a reasonable number of iterations. To achieve solutions of higher accuracy and high speed, in this paper we propose two proximal quasi-Newtonian limited-memory algorithms - MinFBE applied to the dual problem and the Newton-type alternating minimization algorithm (NAMA) - which can be massively parallelized on lockstep hardware such as graphics processing units (GPUs). We demonstrate the performance of these methods, in terms of convergence speed and parallelizability, on large-scale problems involving millions of variables.
We derive the explicit solution to a singular stochastic control problem of the monotone follower type with an expected ergodic criterion as well as to its counterpart with a pathwise ergodic criterion. These problems have been motivated by the optimal sustainable exploitation of an ecosystem, such as a natural fishery. Under general assumptions on the diffusion coefficients and the running payoff function, we show that both performance criteria give rise to the same optimal long-term average rate as well as to the same optimal strategy, which is of a threshold type. We solve the two problems by first constructing a suitable solution to their associated Hamilton-Jacobi-Bellman (HJB) equation, which takes the form of a quasi-variational inequality with a gradient constraint.
In this paper we study a class of combined regular and singular stochastic control problems that can be expressed as constrained BSDEs. In the Markovian case, this reduces to a characterization through a PDE with gradient constraint. But the BSDE formulation makes it possible to move beyond Markovian models and consider path-dependent problems. We also provide an approximation of the original control problem with standard BSDEs that yield a characterization of approximately optimal values and controls.
