57 - Shige Peng, Shuzhen Yang 2020
Based on the law of large numbers and the central limit theorem under nonlinear expectation, we introduce a new method of using the G-normal distribution to measure financial risks. Applying max-mean estimators and the small-windows method, we establish autoregressive models to determine the parameters of the G-normal distribution, i.e., the return and the maximal and minimal volatilities of the time series. Utilizing the value at risk (VaR) predictor model under the G-normal distribution, we show that the G-VaR model gives an excellent performance in predicting the VaR for a benchmark dataset compared with many well-known VaR predictors.
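The max-mean idea behind this abstract can be sketched in a few lines: split the series into small windows, compute each window's sample mean, and take the max and min over windows as estimates of the maximal and minimal means. This is a toy sketch; the function name, the non-overlapping windowing, and the toy data are illustrative, not the paper's implementation.

```python
import statistics

def max_mean_estimates(series, window):
    """Max-mean sketch: split the data into small windows, compute each
    window's sample mean, and take the max / min over windows as rough
    estimates of the maximal and minimal means of the series."""
    means = [
        statistics.fmean(series[i:i + window])
        for i in range(0, len(series) - window + 1, window)
    ]
    return max(means), min(means)

# Toy data whose local mean alternates between +1 and -1 every 10 points.
data = [(-1.0 if (i // 10) % 2 else 1.0) for i in range(100)]
mu_max, mu_min = max_mean_estimates(data, window=10)
```

With the windows aligned to the regime switches, the estimator recovers the two local means exactly; on real returns the window size becomes a tuning parameter.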
In this paper, we aim to solve the high-dimensional stochastic optimal control problem from the perspective of the stochastic maximum principle via deep learning. By introducing the extended Hamiltonian system, which is essentially an FBSDE with a maximum condition, we reformulate the original control problem as a new one. Three algorithms are proposed to solve the new control problem. Numerical results for different examples demonstrate the effectiveness of our proposed algorithms, especially in high-dimensional cases. An important application of this method is the calculation of sublinear expectations, which correspond to a class of fully nonlinear PDEs.
233 - Shige Peng, Quan Zhou 2019
The G-normal distribution was introduced by Peng [2007] as the limiting distribution in the central limit theorem for sublinear expectation spaces. Equivalently, it can be interpreted as the solution to a stochastic control problem where we have a sequence of random variables whose variances can be chosen based on all past information. In this note we study the tail behavior of the G-normal distribution through analyzing a nonlinear heat equation. Asymptotic results are provided so that the tail probabilities can be easily evaluated with high accuracy. This study also has a significant impact on the hypothesis testing theory for heteroscedastic data; we show that even if the data are generated under the null hypothesis, it is possible to cheat and attain statistical significance by sequentially manipulating the error variances of the observations.
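A simple lower bound on the G-normal upper tail comes from restricting to constant-variance models: among all normals N(0, sigma^2) with sigma in [sigma_min, sigma_max], the worst tail at a positive threshold is the maximal-volatility one. This is only a bound, not the paper's sharper asymptotics, since the G-framework also allows the variance to be chosen adaptively over time; the function name is illustrative.

```python
from statistics import NormalDist

def worst_case_normal_tail(x, sigma_min, sigma_max):
    """sup over sigma in [sigma_min, sigma_max] of P(N(0, sigma^2) > x).
    For x >= 0 the supremum is attained at sigma_max; for x < 0 at
    sigma_min.  This constant-variance worst case is a lower bound for
    the G-normal tail probability."""
    sigma = sigma_max if x >= 0 else sigma_min
    return 1.0 - NormalDist(0.0, sigma).cdf(x)
```

For example, widening the volatility band can only increase this worst-case tail, which is the basic mechanism behind the sequential "cheating" phenomenon the abstract mentions.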
Recently, the deep learning method has been used for solving forward-backward stochastic differential equations (FBSDEs) and parabolic partial differential equations (PDEs). It has good accuracy and performance for high-dimensional problems. In this paper, we mainly solve fully coupled FBSDEs through deep learning and provide three algorithms. Several numerical results show remarkable performance, especially for high-dimensional cases.
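The time-discretization backbone that such deep-learning solvers build on can be sketched without any neural network: simulate the forward SDE and a candidate backward process jointly with an Euler scheme, and measure how badly the terminal condition is violated. In the deep-BSDE approach, Y_0 and the map (t, X_t) -> Z_t are parameterized by networks and trained to drive this mismatch to zero; here we show only a toy decoupled example with a hand-chosen constant Z, and all names and coefficients are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy decoupled FBSDE on [0, T]:
#   dX_t = mu dt + sigma dW_t,         X_0 = x0
#   dY_t = -f(Y_t, Z_t) dt + Z_t dW_t, Y_T = g(X_T)
T, n_steps, n_paths = 1.0, 50, 10_000
dt = T / n_steps
mu, sigma, x0 = 0.0, 1.0, 0.0
f = lambda y, z: 0.0          # BSDE driver (trivial in this toy example)
g = lambda x: x               # terminal condition Y_T = X_T

y0, z = 0.0, 1.0              # candidate initial value and control
X = np.full(n_paths, x0)
Y = np.full(n_paths, y0)
for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # shared Brownian increments
    Y += -f(Y, z) * dt + z * dW
    X += mu * dt + sigma * dW
# Terminal mismatch: this is the quantity a deep-BSDE solver minimizes
# over (y0, z-network); here (y0, z) = (0, 1) already matches exactly.
mismatch = np.mean((Y - g(X)) ** 2)
```

With f = 0, g the identity, and z = sigma, the candidate pair reproduces Y_T = X_T exactly, so the mismatch vanishes; for a genuine driver one would instead minimize it numerically.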
Several well-established benchmark predictors exist for Value-at-Risk (VaR), a major instrument for financial risk management. Hybrid methods combining AR-GARCH filtering with skewed-$t$ residuals and the extreme value theory-based approach are particularly recommended. This study introduces yet another VaR predictor, G-VaR, which follows a novel methodology. Inspired by the recent mathematical theory of sublinear expectation, G-VaR is built upon the concept of model uncertainty, which in the present case signifies that the inherent volatility of financial returns cannot be characterized by a single distribution but rather by infinitely many statistical distributions. By considering the worst scenario among these potential distributions, the G-VaR predictor is precisely identified. Extensive experiments on both the NASDAQ Composite Index and S&P500 Index demonstrate the excellent performance of the G-VaR predictor, which is superior to most existing benchmark VaR predictors.
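The worst-scenario idea can be made concrete in a static simplification: if the return distribution is only known to be normal with mean mu and volatility somewhere in [sigma_min, sigma_max], the worst-case VaR at a small level alpha is given by the maximal-volatility model. This is a sketch of the model-uncertainty principle, not the paper's full G-VaR predictor (which estimates the volatility band dynamically); the function name and parameters are illustrative.

```python
from statistics import NormalDist

def g_var(mu, sigma_min, sigma_max, alpha=0.01):
    """Worst-case value at risk at level alpha over all return models
    N(mu, sigma^2) with sigma in [sigma_min, sigma_max].  For alpha < 0.5
    the worst case (largest loss quantile) is the maximal-volatility
    model, giving VaR_alpha = -(mu + sigma_max * Phi^{-1}(alpha))."""
    sigma = sigma_max if alpha < 0.5 else sigma_min
    return -(mu + sigma * NormalDist().inv_cdf(alpha))
```

Widening the volatility band [sigma_min, sigma_max] monotonically increases this worst-case VaR, which is how model uncertainty translates into a more conservative risk number.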
This paper is devoted to studying the properties of the exit times of stochastic differential equations driven by $G$-Brownian motion ($G$-SDEs). In particular, we prove that the exit times of $G$-SDEs have the quasi-continuity property. As an application, we give a probabilistic representation for a large class of fully nonlinear elliptic equations with Dirichlet boundary conditions.
Under the sublinear expectation $\mathbb{E}[\cdot]:=\sup_{\theta\in\Theta} E_\theta[\cdot]$ for a given set of linear expectations $\{E_\theta: \theta\in\Theta\}$, we establish a new law of large numbers and a new central limit theorem with rate of convergence. We present some interesting special cases and discuss a related statistical inference problem. We also give an approximation and a representation of the $G$-normal distribution, which was used as the limit in Peng (2007)'s central limit theorem, in a probability space.
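A Monte Carlo toy illustrates what this law of large numbers says: when each observation's mean may be chosen, even adaptively, from an interval $[\mu_{\min}, \mu_{\max}]$, the sample average need not converge to a single number, but its limit points stay within that interval. The adversary strategy below (pushing the running sum toward whichever sign it currently lacks) is purely illustrative.

```python
import random

random.seed(7)

# Each X_i is drawn from N(mean_i, 1) where mean_i is chosen adaptively
# from {mu_min, mu_max} based on the past -- exactly the kind of model
# set a sublinear expectation describes.  The sample average then stays
# within [mu_min, mu_max] but is not pinned to one limit.
mu_min, mu_max, n = -1.0, 1.0, 20_000
total = 0.0
for _ in range(n):
    mean_i = mu_max if total < 0 else mu_min  # adaptive mean choice
    total += random.gauss(mean_i, 1.0)
avg = total / n
```

Different adversary strategies steer the average to different points of the interval; the sublinear LLN says the interval $[\mu_{\min}, \mu_{\max}]$ is exactly the set of attainable limits.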
264 - Hanwu Li, Shige Peng 2017
In this paper, we study the reflected backward stochastic differential equation driven by G-Brownian motion (reflected G-BSDE for short) with an upper obstacle. The existence is proved by approximation via penalization. By using a variant of the comparison theorem, we show that the solution we constructed is the largest one.
169 - Hanwu Li, Shige Peng 2017
In this paper, we study the reflected solutions of one-dimensional backward stochastic differential equations driven by G-Brownian motion (RGBSDE for short). The reflection keeps the solution above a given stochastic process. In order to derive the uniqueness of reflected G-BSDEs, we apply a martingale condition instead of the Skorohod condition. Similar to the classical case, we prove the existence by approximation via penalization.
The objective of this paper is to establish the decomposition theorem for supermartingales under the $G$-framework. We first introduce a $g$-nonlinear expectation via a kind of $G$-BSDE and the associated supermartingales. We show that this kind of supermartingale admits a decomposition similar to the classical case. The main ideas are to apply the uniform continuity property of $S_G^\beta(0,T)$, the representation of the solution to the $G$-BSDE, and the approximation method via penalization.