We consider the stochastic optimal control problem for a dynamical system governed by a stochastic differential equation driven by a local martingale with a spatial parameter. Assuming convexity of the control domain, we obtain the stochastic maximum principle as a necessary condition for an optimal control, and we also prove its sufficiency under suitable conditions. The stochastic linear quadratic problem in this setting is also discussed.
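For orientation, a schematic form of the first-order condition one expects under a convex control domain $U$ (generic notation, not the paper's exact statement; $H$ is the Hamiltonian and $(p,q)$ the adjoint pair solving a backward SDE):
\begin{equation*}
  \big\langle H_u(t,\bar x_t,\bar u_t,p_t,q_t),\, v-\bar u_t \big\rangle \le 0
  \qquad \text{for all } v\in U,\ \text{a.e. } t\in[0,T],\ \text{a.s.},
\end{equation*}
with the direction of the inequality fixed by whether the cost is minimized or maximized.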
In this paper we prove necessary conditions for optimality of a stochastic control problem for a class of stochastic partial differential equations controlled through the boundary. This kind of problem can be interpreted as a stochastic control problem for an evolution system in a Hilbert space. The regularity of the solution of the adjoint equation, which is a backward stochastic equation in infinite dimensions, plays a crucial role in the formulation of the maximum principle.
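A generic way to cast boundary control in this abstract framework (illustrative notation; the lifting map $D$ is an assumption of this sketch, not an object defined in the paper):
\begin{equation*}
  dX_t = \big(A X_t + (\lambda I - A) D\,u_t\big)\,dt + G\,dW_t, \qquad X_0 = x \in H,
\end{equation*}
where $A$ generates a strongly continuous semigroup on the Hilbert space $H$, $D$ lifts the boundary control $u_t$ into $H$, and $W$ is a cylindrical Wiener process; the adjoint equation is then a backward stochastic evolution equation on $H$.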
In this paper, we study a system of stochastic partial differential equations with slow and fast time scales, where the slow component is a stochastic real Ginzburg-Landau equation and the fast component is a stochastic reaction-diffusion equation; the system is driven by an $\alpha$-stable process with $\alpha\in(1,2)$. Using the classical Khasminskii approach based on time discretization and the technique of stopping times, we show that the slow component converges strongly to the solution of the corresponding averaged equation under suitable conditions.
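Schematically, such a slow-fast system and its averaging limit take the following form (illustrative notation with scale parameter $\varepsilon>0$; $L^1$, $L^2$ denote independent $\alpha$-stable noises):
\begin{align*}
  dX^\varepsilon_t &= \big[\Delta X^\varepsilon_t + f(X^\varepsilon_t) + b(X^\varepsilon_t, Y^\varepsilon_t)\big]\,dt + dL^1_t,\\
  dY^\varepsilon_t &= \frac{1}{\varepsilon}\big[\Delta Y^\varepsilon_t + g(X^\varepsilon_t, Y^\varepsilon_t)\big]\,dt + \varepsilon^{-1/\alpha}\,dL^2_t,
\end{align*}
and the averaged equation for $\bar X$ is obtained by replacing $b(x,\cdot)$ with its average against the invariant measure of the fast motion with $x$ frozen.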
We consider a stochastic fluid queue served by a constant-rate server and driven by a process which is the local time of a certain Markov process. Such a stochastic system can be used as a model in a priority service system, especially when the time scales involved are fast. The input (local time) in our model is always singular with respect to the Lebesgue measure, which in many applications is ``close to reality''. We first discuss how to rigorously construct the (necessarily) unique stationary version of the system under natural stability conditions. We then consider the distributions of steady-state performance characteristics, namely the buffer content, the idle period and the busy period. These derivations rely heavily on the fact that the inverse of the local time of a Markov process is a Lévy process (a subordinator), which makes the theory of Lévy processes applicable. Another important ingredient in our approach is the Palm calculus coming from the point-process point of view.
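For a unit-rate server fed by the local-time input $L$, the stationary buffer content admits the classical Reich/Loynes representation (generic queueing notation, stated here only for illustration):
\begin{equation*}
  W(t) = \sup_{-\infty < s \le t}\big(L(t)-L(s)-(t-s)\big),
\end{equation*}
which is finite under the natural stability condition that the stationary mean input rate is strictly less than the service rate.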
In this paper we study the optimal stochastic control problem for stochastic differential systems reflected in a domain. The cost functional is a recursive one, defined via the generalized backward stochastic differential equations developed by Pardoux and Zhang [20]. The value function is shown to be the unique viscosity solution of the associated Hamilton-Jacobi-Bellman equation, which is a fully nonlinear parabolic partial differential equation with a nonlinear Neumann boundary condition. To this end, we also prove some new estimates for stochastic differential systems reflected in a domain.
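Schematically, the HJB equation in question couples a fully nonlinear interior equation with a nonlinear Neumann condition (illustrative notation; $\mathcal{L}^u$ is the controlled generator and $n$ the inward normal to the domain $G$):
\begin{align*}
  \partial_t V + \sup_{u}\Big\{\mathcal{L}^u V + f\big(x, V, \sigma^\top \nabla V, u\big)\Big\} &= 0 && \text{in } [0,T)\times G,\\
  \frac{\partial V}{\partial n} + g(x, V) &= 0 && \text{on } [0,T)\times \partial G,\\
  V(T,\cdot) &= h && \text{on } \bar G.
\end{align*}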
Consider the following stochastic heat equation,
\begin{align*}
  \frac{\partial u_t(x)}{\partial t} = -\nu(-\Delta)^{\alpha/2} u_t(x) + \sigma(u_t(x))\,\dot{F}(t,x), \quad t>0,\ x\in \mathbb{R}^d.
\end{align*}
Here $-\nu(-\Delta)^{\alpha/2}$ is the fractional Laplacian with $\nu>0$ and $\alpha\in(0,2]$, $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ is a globally Lipschitz function, and $\dot{F}(t,x)$ is a Gaussian noise which is white in time and colored in space. Under suitable additional conditions, we prove a strong comparison theorem and explore the effect of the initial data on the spatial asymptotic properties of the solution. This constitutes an important extension of a series of recent works.
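As a schematic illustration of what a strong comparison theorem asserts in this setting (not the paper's precise hypotheses): if $u$ and $v$ solve the equation with initial data satisfying $u_0 \le v_0$ and $u_0 \not\equiv v_0$, then, almost surely,
\begin{equation*}
  u_t(x) < v_t(x) \qquad \text{for all } t>0,\ x\in\mathbb{R}^d.
\end{equation*}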