A formal approach to nonlinear filtering of stochastic differential equations is the Kushner setting of applied mathematics and dynamical systems. Via the Carleman linearization, a nonlinear stochastic differential equation can be equivalently expressed, under a finite closure, as a finite system of bilinear stochastic differential equations in an augmented state. The novelty of this paper is to embed the Carleman linearization into the stochastic evolution of a Markov process. To illustrate this, the paper applies the Carleman linearization to a nonlinear stochastic swing equation and then develops filtering of the swing equation in the Carleman setting. Filtering in the Carleman setting has a simpler algorithmic procedure, since the augmented state accounts for both the nonlinearity and the stochasticity. We show that filtering of the nonlinear stochastic swing equation in the Carleman framework is more refined and sharper than the benchmark nonlinear extended Kalman filter (EKF). These results suggest the usefulness of embedding the Carleman linearization into a stochastic differential equation in order to filter the nonlinear stochastic differential system. This paper will be of interest to nonlinear stochastic dynamicists exploring linearization-embedding techniques in their research.
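To make the mechanism concrete, here is a minimal sketch of Carleman linearization for a scalar polynomial ODE, a deterministic stand-in for the stochastic case; the coefficients, truncation order, and step size are illustrative assumptions, not taken from the paper. The monomials x, x^2, ..., x^N form the augmented state, and dropping x^(N+1) gives the finite closure.

```python
import numpy as np

# Carleman linearization of the scalar ODE dx/dt = a*x + b*x**2.
# Coefficients, truncation order N, and step size are illustrative.
a, b, N = -1.0, 0.5, 6

# Augmented state y = (x, x^2, ..., x^N).  Since
#   d(x^k)/dt = k*x**(k-1) * (a*x + b*x**2) = k*a*x**k + k*b*x**(k+1),
# dropping x**(N+1) (the finite closure) leaves a linear system dy/dt = A y.
A = np.zeros((N, N))
for k in range(1, N + 1):
    A[k - 1, k - 1] = k * a          # coefficient of x**k
    if k < N:
        A[k - 1, k] = k * b          # coefficient of x**(k+1)

x0, dt, steps = 0.4, 1e-3, 5000

# Euler-integrate the Carleman system and the original ODE side by side.
y = np.array([x0 ** k for k in range(1, N + 1)])
x = x0
for _ in range(steps):
    y = y + dt * (A @ y)
    x = x + dt * (a * x + b * x ** 2)

print(abs(y[0] - x))  # closure error in the first Carleman component
```

The first component of the augmented state tracks the true solution; the discrepancy is due only to the dropped monomial, since both integrations use the same Euler scheme.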
In this paper we study a Markovian two-dimensional bounded-variation stochastic control problem whose state process consists of a diffusive mean-reverting component and of a purely controlled one. The problem's main characteristic lies in the interact
The G-normal distribution was introduced by Peng [2007] as the limiting distribution in the central limit theorem for sublinear expectation spaces. Equivalently, it can be interpreted as the solution to a stochastic control problem where we have a se
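The PDE side of this interpretation can be sketched numerically: the sublinear expectation of a test function under the G-normal distribution solves the G-heat equation du/dt = G(d2u/dx2) with G(a) = (1/2)(sU^2 a^+ − sL^2 a^−). The volatility bounds sL, sU, the test function, and the grid below are illustrative assumptions, not from the paper.

```python
import numpy as np

# Explicit finite-difference sketch of the G-heat equation
#   du/dt = G(d2u/dx2),  G(a) = 0.5*(sU**2 * a^+ - sL**2 * a^-),
# whose solution at (t=1, x=0) is the sublinear expectation of phi
# under the G-normal distribution with variance range [sL**2, sU**2].
sL, sU = 0.5, 1.0                     # illustrative volatility bounds
x = np.linspace(-6, 6, 601)
dx = x[1] - x[0]
dt = 0.2 * dx ** 2 / sU ** 2          # explicit-scheme stability margin
phi = lambda z: z ** 2                # convex test function
u = phi(x)

t = 0.0
while t < 1.0:
    d2u = np.zeros_like(u)
    d2u[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2
    G = 0.5 * (sU ** 2 * np.maximum(d2u, 0) - sL ** 2 * np.minimum(d2u, 0))
    u = u + dt * G
    t += dt

# For a convex phi the supremum over volatilities is attained at sU,
# so the value at x = 0 matches E[X**2] = sU**2 for X ~ N(0, sU**2).
print(u[300])
```

For non-convex test functions the max/min split in G is active and the result genuinely differs from any single Gaussian, which is the sublinearity the abstract refers to.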
We present a dynamical system framework for understanding Nesterov's accelerated gradient method. In contrast to earlier work, our derivation does not rely on a vanishing step size argument. We show that Nesterov acceleration arises from discretizing
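A minimal numerical sketch of the accelerated method itself, compared against plain gradient descent; the quadratic objective, step size, and the momentum schedule (k-1)/(k+2) are standard illustrative choices, not the paper's derivation.

```python
import numpy as np

# Nesterov's accelerated gradient vs. plain gradient descent on a
# hypothetical ill-conditioned quadratic f(x) = 0.5 * x @ Q @ x.
Q = np.diag([1.0, 100.0])
grad = lambda z: Q @ z
f = lambda z: 0.5 * z @ Q @ z
eta = 1.0 / 100.0                  # step size 1/L, L = largest eigenvalue

x_gd = x_prev = x_nag = np.array([1.0, 1.0])
for k in range(1, 201):
    # Nesterov: gradient step taken at an extrapolated look-ahead point.
    momentum = (k - 1) / (k + 2)
    v = x_nag + momentum * (x_nag - x_prev)
    x_prev, x_nag = x_nag, v - eta * grad(v)
    # Plain gradient descent at the same step size, for comparison.
    x_gd = x_gd - eta * grad(x_gd)

print(f(x_nag), f(x_gd))  # accelerated iterate reaches a lower objective
```

The look-ahead point v is what a discretization of the underlying second-order ODE produces; with the same budget of gradient evaluations the accelerated sequence decreases f markedly faster on the slow eigendirection.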
A multiplicative relative value iteration algorithm for solving the dynamic programming equation of the risk-sensitive control problem is studied for discrete-time controlled Markov chains with a compact Polish state space, and for controlled diffusions
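On a small finite chain the multiplicative relative value iteration can be sketched directly; the randomly generated transition kernels, costs, and reference state below are illustrative assumptions, not from the paper. Each sweep applies the risk-sensitive Bellman operator and renormalizes by the value at a fixed reference state, and the normalization constant converges to the multiplicative ergodic constant exp(lambda*).

```python
import numpy as np

rng = np.random.default_rng(0)
nS, nA = 4, 2

# Hypothetical finite controlled Markov chain: transition kernels P[a]
# (rows normalized to probabilities) and running costs c[x, a] in (0, 1).
P = rng.random((nA, nS, nS))
P /= P.sum(axis=2, keepdims=True)
c = rng.random((nS, nA))

def bellman(V):
    # Risk-sensitive Bellman operator:
    #   (T V)(x) = min_a exp(c(x, a)) * sum_y P(y | x, a) V(y)
    return np.min(np.exp(c) * np.einsum('axy,y->xa', P, V), axis=1)

# Multiplicative relative value iteration: renormalize each sweep by
# the value at the reference state x0 = 0.
V = np.ones(nS)
for _ in range(500):
    TV = bellman(V)
    rho = TV[0]          # converges to exp(lambda*)
    V = TV / rho

print(np.log(rho))       # estimate of the optimal risk-sensitive cost
```

At convergence V is the positive eigenfunction of the min-linear Bellman operator and rho its principal eigenvalue, which is the multiplicative analogue of the additive relative value iteration for average-cost control.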
In this paper, we briefly review the development of ranking-and-selection (R&S) in the past 70 years, especially the theoretical achievements and practical applications in the last 20 years. Different from the frequentist and Bayesian classifications