The Kushner setting is a formal framework in applied mathematics and dynamical systems for nonlinear filtering of stochastic differential equations. Via Carleman linearization, a nonlinear stochastic differential equation can be recast, under a finite closure, as a finite system of bilinear stochastic differential equations in an augmented state. The novelty of this paper is to embed the Carleman linearization into the stochastic evolution of a Markov process. To illustrate this, we apply the Carleman linearization to a nonlinear stochastic swing equation and then carry out filtering of the swing equation in the Carleman setting. Filtering in the Carleman setting admits a simplified algorithmic procedure, since the augmented state accounts for both the nonlinearity and the stochasticity. We show that filtering of the nonlinear stochastic swing equation in the Carleman framework yields sharper, more refined estimates than the benchmark extended Kalman filter (EKF). These results suggest the usefulness of embedding the Carleman linearization into a stochastic differential equation in order to filter the underlying nonlinear stochastic system, and should interest researchers in nonlinear stochastic dynamics exploring linearization-embedding techniques.
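As a minimal sketch of the finite-closure construction (the scalar cubic SDE below stands in for the second-order swing equation; the parameters a, b, sigma and the truncation order N are illustrative assumptions, not from the paper), Ito's formula applied to the monomials x^k turns a polynomial SDE into a bilinear system in the augmented state y = (1, x, x^2, ..., x^N):

```python
# Minimal sketch of the Carleman embedding (assumptions: the scalar cubic
# SDE stands in for the swing equation; a, b, sigma, N are illustrative).
import numpy as np

a, b, sigma, N = -0.5, 1.0, 0.2, 8       # drift/diffusion and closure order

# Ito's formula on x^k with constant diffusion sigma:
#   d(x^k) = [k*a*x^k - k*b*x^(k+2) + 0.5*k*(k-1)*sigma^2*x^(k-2)] dt
#            + k*sigma*x^(k-1) dW.
# With augmented state y = (1, x, x^2, ..., x^N) and powers above N dropped
# (finite closure), this is the bilinear system dy = A y dt + G y dW.
A = np.zeros((N + 1, N + 1))
G = np.zeros((N + 1, N + 1))
for k in range(1, N + 1):
    A[k, k] += k * a
    if k + 2 <= N:
        A[k, k + 2] -= k * b             # term truncated once k + 2 > N
    if k >= 2:
        A[k, k - 2] += 0.5 * k * (k - 1) * sigma**2   # Ito correction
    G[k, k - 1] = k * sigma              # multiplicative noise coefficient

# Euler-Maruyama on both systems, driven by the same Brownian increments.
rng = np.random.default_rng(0)
T, steps = 2.0, 4000
dt = T / steps
x = 0.8
y = np.array([x**k for k in range(N + 1)], dtype=float)
for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt))
    x += (a * x - b * x**3) * dt + sigma * dW
    y += (A @ y) * dt + (G @ y) * dW

print(f"direct simulation x(T) = {x:.4f}")
print(f"Carleman state   y1(T) = {y[1]:.4f}")   # y1 tracks x
```

The constant component y_0 = 1 absorbs the x^0 terms produced by the Ito correction, which is what makes the truncated system genuinely bilinear rather than merely affine.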
In this paper we study a Markovian two-dimensional bounded-variation stochastic control problem whose state process consists of a diffusive mean-reverting component and a purely controlled one. The problem's main characteristic lies in the interaction of the two components of the state process: the mean-reversion level of the diffusive component is an affine function of the current value of the purely controlled one. By relying on a combination of techniques from viscosity theory and free-boundary analysis, we provide the structure of the value function and show that it satisfies a second-order smooth-fit principle. This regularity is then exploited to determine a system of functional equations solved by the two monotone continuous curves (free boundaries) that split the control problem's state space into three connected regions. Further properties of the free boundaries are also obtained.
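The abstract does not state the dynamics explicitly; one plausible formalization consistent with its description (the coefficients kappa, alpha, beta, sigma and the notation below are assumptions for illustration, not the paper's exact setup) is:

```latex
% One plausible formalization (coefficients and notation are assumptions):
% X is the diffusive component, Y the purely controlled one.
\begin{aligned}
  dX_t &= \kappa \bigl( \alpha + \beta Y_t - X_t \bigr)\,dt + \sigma\,dW_t,
      \qquad X_0 = x, \\
  dY_t &= d\xi^{+}_t - d\xi^{-}_t, \qquad Y_{0^-} = y,
\end{aligned}
```

Here \xi^{+} and \xi^{-} are the nondecreasing parts of the bounded-variation control, the affine mean-reversion level \alpha + \beta Y_t is the coupling the abstract describes, and the two free boundaries would then separate the region where \xi^{+} acts, the inaction region, and the region where \xi^{-} acts.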
The G-normal distribution was introduced by Peng [2007] as the limiting distribution in the central limit theorem for sublinear expectation spaces. Equivalently, it can be interpreted via a stochastic control problem involving a sequence of random variables whose variances can be chosen based on all past information. In this note we study the tail behavior of the G-normal distribution by analyzing a nonlinear heat equation. Asymptotic results are provided so that the tail probabilities can be evaluated easily and with high accuracy. This study also bears significantly on hypothesis testing for heteroscedastic data: we show that even if the data are generated under the null hypothesis, it is possible to cheat and attain statistical significance by sequentially manipulating the error variances of the observations.
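For readers who want to evaluate such tail probabilities numerically, a minimal sketch follows (the variance bounds, grid, and threshold below are illustrative assumptions). For X ~ N(0, [s1^2, s2^2]) the sublinear expectation E[phi(X)] equals u(1, 0), where u solves Peng's G-heat equation du/dt = G(d^2u/dx^2) with G(a) = 0.5*(s2^2 * a^+ - s1^2 * a^-) and u(0, .) = phi; taking phi as the indicator of [c, infinity) gives the upper tail probability.

```python
# Minimal sketch: explicit finite differences for the G-heat equation
# (assumptions: the variance bounds, grid sizes, and threshold c are
# illustrative; the paper's asymptotics replace this brute-force solve).
import numpy as np

s1, s2 = 0.5, 1.0                  # variance bounds [s1^2, s2^2]
c = 2.0                            # tail threshold: upper probability of {X >= c}
Lx, nx, nt = 8.0, 801, 20000       # spatial domain [-Lx, Lx], grid resolution
x = np.linspace(-Lx, Lx, nx)
dx = x[1] - x[0]
dt = 1.0 / nt                      # explicit scheme: needs dt <= dx^2 / s2^2

u = (x >= c).astype(float)         # phi = indicator of the tail event
for _ in range(nt):
    uxx = np.zeros_like(u)
    uxx[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    # G(a) = 0.5 * (s2^2 * max(a, 0) + s1^2 * min(a, 0))
    u = u + dt * 0.5 * (s2**2 * np.maximum(uxx, 0) + s1**2 * np.minimum(uxx, 0))

i0 = nx // 2                       # grid point at x = 0
print(f"upper tail probability E[1(X >= {c})] ~ {u[i0]:.5f}")
```

Because the indicator is neither convex nor concave, the maximizing variance switches across the domain; this switching is exactly what a sequential manipulator of error variances exploits.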
We present a dynamical systems framework for understanding Nesterov's accelerated gradient method. In contrast to earlier work, our derivation does not rely on a vanishing step size argument. We show that Nesterov acceleration arises from discretizing an ordinary differential equation with a semi-implicit Euler integration scheme. We analyze both the underlying differential equation and the discretization to obtain insights into the phenomenon of acceleration. The analysis suggests that a curvature-dependent damping term lies at the heart of the phenomenon. We further establish connections between the discretized and the continuous-time dynamics.
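To make the mechanism concrete, here is a minimal sketch (the quadratic test problem, step size, and the specific ODE are illustrative assumptions; this uses the well-known ODE x'' + (3/t) x' + grad f(x) = 0 rather than the paper's curvature-dependent-damping variant): a semi-implicit Euler step updates the velocity first and then the position, and the damping factor 1 - 3h/t becomes the familiar Nesterov momentum coefficient k/(k+3).

```python
# Minimal sketch (assumptions: the test problem, step size, and the ODE
# x'' + (3/t) x' + grad f(x) = 0 are illustrative, not the paper's exact setup).
import numpy as np

rng = np.random.default_rng(1)
d = np.logspace(-3, 0, 20)                 # eigenvalues: condition number 1e3
U = np.linalg.qr(rng.normal(size=(20, 20)))[0]
Q = U @ np.diag(d) @ U.T                   # f(x) = 0.5 x'Qx, smooth with L = 1
grad = lambda x: Q @ x
f = lambda x: 0.5 * x @ Q @ x
L = d[-1]

x0 = rng.normal(size=20)

# Gradient descent baseline with step 1/L.
xg = x0.copy()
for _ in range(500):
    xg -= (1.0 / L) * grad(xg)

# Semi-implicit Euler: update velocity first (damping 3/t), then position.
# With h = 1/sqrt(L) and t_k = (k + 3) h, the damping factor 1 - 3h/t_k
# equals k/(k+3), the familiar Nesterov momentum coefficient.
h = 1.0 / np.sqrt(L)
x, v = x0.copy(), np.zeros_like(x0)
for k in range(500):
    t = (k + 3) * h
    v = (1.0 - 3.0 * h / t) * v - h * grad(x)
    x = x + h * v

print(f"f after 500 steps: gradient descent {f(xg):.3e} vs accelerated {f(x):.3e}")
```

On this ill-conditioned quadratic the semi-implicit discretization converges orders of magnitude faster than gradient descent, even though both take gradient steps of effective size h^2 = 1/L.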
A multiplicative relative value iteration algorithm for solving the dynamic programming equation of the risk-sensitive control problem is studied for discrete-time controlled Markov chains with a compact Polish state space, and for controlled diffusions on the whole Euclidean space. The main result is a proof of convergence to the desired limit in each case.
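On a finite toy model the scheme is easy to state (the finite MDP below, with its costs and kernels, is an assumption; the abstract's setting is compact Polish state spaces and diffusions): iterate the multiplicative Bellman operator and renormalize by its value at a fixed reference state, so the normalizer converges to the optimal growth factor rho (with log rho the optimal risk-sensitive average cost) and the iterates converge to the relative value function.

```python
# Minimal sketch on a finite state/action space (the toy MDP is an
# assumption, not the paper's setting). Multiplicative relative value
# iteration for the risk-sensitive DP equation
#   rho * V(x) = min_a exp(c(x, a)) * sum_y p(y | x, a) * V(y):
#   V_{n+1} = T V_n / (T V_n)(x0),   rho_n = (T V_n)(x0).
import numpy as np

rng = np.random.default_rng(2)
nS, nA = 6, 3
P = rng.random((nA, nS, nS))
P /= P.sum(axis=2, keepdims=True)          # transition kernels p(y | x, a)
c = rng.random((nA, nS))                   # one-step costs c(x, a)
x0 = 0                                     # reference state for normalization

V = np.ones(nS)
for _ in range(2000):
    # (T V)(x) = min over a of exp(c(x, a)) * sum_y p(y | x, a) * V(y)
    TV = (np.exp(c)[:, :, None] * P * V[None, None, :]).sum(axis=2).min(axis=0)
    rho, V = TV[x0], TV / TV[x0]

print(f"optimal growth factor rho ~ {rho:.6f} (risk-sensitive cost log rho)")
print(f"relative value function V = {np.round(V, 4)}")
```

The normalization by (T V_n)(x0) is what distinguishes the relative scheme: it keeps the iterates bounded while the unnormalized iteration would grow like rho^n.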
In this paper, we briefly review the development of ranking-and-selection (R&S) over the past 70 years, with emphasis on the theoretical achievements and practical applications of the last 20 years. Departing from the frequentist and Bayesian classifications adopted by Kim and Nelson (2006b) and Chick (2006) in their review articles, we categorize existing R&S procedures into fixed-precision and fixed-budget procedures, as in Hunter and Nelson (2017). We show that these two categories of procedures differ essentially in their underlying methodological formulations: they are built on hypothesis testing and dynamic programming, respectively. In light of this distinction, we review in detail some well-known procedures in the literature and show how they fit into these two formulations. In addition, we discuss the use of R&S procedures in solving various practical problems and propose what we believe are the important research questions in the field.
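To make the two formulations concrete, here is a toy sketch (the allocation heuristic and elimination rule below are illustrative stand-ins, not any specific published procedure): a fixed-budget procedure spends a given sampling budget and then picks the apparent best, while a fixed-precision procedure keeps sampling until all but one system has been eliminated.

```python
# Toy sketch contrasting fixed-budget and fixed-precision R&S on k normal
# systems (means, thresholds, and rules are illustrative assumptions).
import numpy as np

rng = np.random.default_rng(3)
means, sd, k = np.array([1.0, 1.2, 1.5, 1.6]), 1.0, 4
draw = lambda i, n: rng.normal(means[i], sd, n)

# Fixed-budget: spend a total budget N, routing each new sample to the
# system whose estimate is least certain relative to its gap from the best.
N, n0 = 400, 10
samples = [list(draw(i, n0)) for i in range(k)]
for _ in range(N - k * n0):
    m = np.array([np.mean(s) for s in samples])
    se = np.array([np.std(s, ddof=1) / np.sqrt(len(s)) for s in samples])
    gap = m.max() - m + 1e-9
    i = int(np.argmax(se / gap))           # crude "most confusable" heuristic
    samples[i].append(draw(i, 1)[0])
best_fb = int(np.argmax([np.mean(s) for s in samples]))

# Fixed-precision: sample all survivors in rounds, eliminating any system
# whose confidence interval falls clearly below the leader's (toy rule).
alive = list(range(k))
data = [list(draw(i, n0)) for i in range(k)]
while len(alive) > 1:
    for i in alive:
        data[i].append(draw(i, 1)[0])
    m = {i: np.mean(data[i]) for i in alive}
    w = {i: 3.0 * np.std(data[i], ddof=1) / np.sqrt(len(data[i])) for i in alive}
    leader = max(alive, key=m.get)
    alive = [i for i in alive if m[i] + w[i] >= m[leader] - w[leader]]
best_fp = alive[0]

print(f"fixed-budget pick: system {best_fb}; fixed-precision pick: system {best_fp}")
```

The contrast mirrors the two formulations in the abstract: the fixed-budget loop is an allocation decision made at every step (a dynamic-programming flavor), while the fixed-precision loop is a sequential test that stops only once the evidence meets a preset standard.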