
A non linear approximation method for solving high dimensional partial differential equations: Application in Finance

Submitted by: Jose Arturo Infante Acevedo
Publication date: 2013
Language: English





We study an algorithm proposed by Chinesta et al. to solve high-dimensional partial differential equations. The idea is to represent the solution as a sum of tensor products and to compute the terms of this sum iteratively. This algorithm is related to the so-called greedy algorithm introduced by Temlyakov. In this paper, we investigate the application of the greedy algorithm in finance, more precisely to the option pricing problem. We approximate the solution to the Black-Scholes equation and propose a variance reduction method. In numerical experiments, we obtain results for up to 10 underlyings. Moreover, the proposed variance reduction method yields a significant reduction of the variance in comparison with a classical Monte Carlo method.
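
As a rough, self-contained illustration of the "sum of tensor products, built one term at a time" format described above, the following NumPy sketch performs a greedy rank-one decomposition of a discretized two-variable payoff-like function. It is a toy for the approximation format only, an assumed simplification rather than the authors' Black-Scholes solver or variance reduction scheme; the name `greedy_rank_one_terms` and all parameter choices are illustrative.

```python
import numpy as np

def greedy_rank_one_terms(F, n_terms=4, n_als=50, seed=0):
    """Greedy pure-tensor-product approximation of a 2-D array F:
    F ~ sum_k np.outer(r_k, s_k), where each new term is fitted to the
    current residual by a small alternating least-squares fixed point."""
    rng = np.random.default_rng(seed)
    R = F.astype(float)
    terms = []
    for _ in range(n_terms):
        r = rng.standard_normal(F.shape[0])
        s = rng.standard_normal(F.shape[1])
        for _ in range(n_als):
            r = R @ s / (s @ s)        # best r for the current s
            s = R.T @ r / (r @ r)      # best s for the current r
        terms.append((r, s))
        R = R - np.outer(r, s)         # greedy update: subtract the new term
    return terms

# Toy usage: a basket-call-like payoff max((x + y)/2 - K, 0) sampled on a grid.
x = np.linspace(0.0, 1.0, 64)
y = np.linspace(0.0, 1.0, 64)
F = np.maximum(np.add.outer(x, y) / 2.0 - 0.4, 0.0)
approx = sum(np.outer(r, s) for r, s in greedy_rank_one_terms(F))
print("relative error:", np.linalg.norm(F - approx) / np.linalg.norm(F))
```

Each greedy step fits a single tensor-product term to the current residual by an alternating fixed-point iteration and then removes it, mirroring the iterative construction of the sum described above.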


Read also

We identify the stochastic processes associated with one-sided fractional partial differential equations on a bounded domain with various boundary conditions. This is essential for modelling using spatial fractional derivatives. We show well-posedness of the associated Cauchy problems in $C_0(\Omega)$ and $L_1(\Omega)$. In order to do so, we develop a new method of embedding finite state Markov processes into Feller processes and then show convergence of the respective Feller processes. This also gives a numerical approximation of the solution. The proof of well-posedness closes a gap in many numerical algorithm articles approximating solutions to fractional differential equations that use the Lax-Richtmyer Equivalence Theorem to prove convergence without checking well-posedness.
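
As generic background on spatial fractional derivatives (not the paper's Feller-process embedding or its boundary-condition treatment), the sketch below evaluates a left-sided fractional derivative with the classical Grünwald-Letnikov sum and checks it against the closed form $D^\alpha x^2 = 2x^{2-\alpha}/\Gamma(3-\alpha)$; the helper names are hypothetical.

```python
import numpy as np
from math import gamma

def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * binom(alpha, k), via the recursion."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_fractional_derivative(f, x, alpha, h=1e-3):
    """Left-sided fractional derivative of order alpha at x (base point 0),
    approximated by the first-order Grünwald-Letnikov sum."""
    n = int(round(x / h))
    w = gl_weights(alpha, n)
    xs = x - h * np.arange(n + 1)          # sample points x, x-h, ..., down to ~0
    return (w @ f(xs)) / h**alpha

alpha, x = 0.6, 1.0
approx = gl_fractional_derivative(lambda t: t**2, x, alpha)
exact = 2.0 * x**(2 - alpha) / gamma(3 - alpha)
print(approx, exact)   # the two values agree to first order in h
```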
Quanhui Zhu, Jiang Yang (2021)
At present, deep learning based methods are being employed to resolve the computational challenges of high-dimensional partial differential equations (PDEs). But the computation of high-order derivatives of neural networks is costly, and high-order derivatives lack robustness for training purposes. We propose a novel approach to solving PDEs with high-order derivatives by simultaneously approximating the function value and its derivatives. We introduce intermediate variables to rewrite the PDEs into a system of low-order differential equations, as is done in the local discontinuous Galerkin method. The intermediate variables and the solutions to the PDEs are simultaneously approximated by a multi-output deep neural network. By taking the residual of the system as a loss function, we can optimize the network parameters to approximate the solution. The whole process relies only on low-order derivatives. Numerous numerical examples demonstrate that our local deep learning method is efficient, robust, flexible, and particularly well-suited for high-dimensional PDEs with high-order derivatives.
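
A minimal PyTorch sketch of the intermediate-variable idea on a 1-D toy problem $u'' = f$, $u(0)=u(1)=0$: a single two-output network represents both $u$ and the auxiliary variable $p \approx u'$, and the loss couples $p = u'$ with $p' = f$, so only first-order autograd derivatives are ever formed. This is an assumed simplification, not the authors' architecture, sampling, or training setup.

```python
import math
import torch

torch.manual_seed(0)

# Two outputs: u(x) and the auxiliary variable p(x) ~ u'(x).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def d(y, x):
    """First-order derivative dy/dx via autograd; no higher-order derivatives needed."""
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]

f = lambda x: -(math.pi ** 2) * torch.sin(math.pi * x)   # manufactured right-hand side

for step in range(3000):
    x = torch.rand(256, 1, requires_grad=True)            # interior collocation points
    u, p = net(x).split(1, dim=1)
    res_aux = p - d(u, x)                                  # enforce p = u'
    res_pde = d(p, x) - f(x)                               # enforce p' = f, i.e. u'' = f
    xb = torch.tensor([[0.0], [1.0]])
    loss = (res_aux ** 2).mean() + (res_pde ** 2).mean() + (net(xb)[:, :1] ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# The exact solution is u(x) = sin(pi x); compare at x = 0.5.
print(float(net(torch.tensor([[0.5]]))[0, 0]), math.sin(math.pi * 0.5))
```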
In this work we apply the Deep Galerkin Method (DGM) described in Sirignano and Spiliopoulos (2018) to solve a number of partial differential equations that arise in quantitative finance applications, including option pricing, optimal execution, and mean field games. The main idea behind DGM is to represent the unknown function of interest using a deep neural network. A key feature of this approach is that, unlike other commonly used numerical approaches such as finite difference methods, it is mesh-free. As such, it does not suffer (as much as other numerical methods) from the curse of dimensionality associated with high-dimensional PDEs and PDE systems. The main goals of this paper are to elucidate the features, capabilities, and limitations of DGM by analyzing aspects of its implementation for a number of different PDEs and PDE systems. Additionally, we present: (1) a brief overview of PDEs in quantitative finance along with numerical methods for solving them; (2) a brief overview of deep learning and, in particular, the notion of neural networks; (3) a discussion of the theoretical foundations of DGM, with a focus on the justification of why this method is expected to perform well.
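
As a sketch of what a mesh-free, DGM-style loss looks like for the 1-D Black-Scholes equation with a European call terminal payoff: the PDE residual is evaluated at randomly sampled interior points $(t, S)$ and the terminal condition at sampled maturity points, and this loss would then be minimized with a stochastic optimizer such as Adam. The plain feed-forward network here merely stands in for the architecture used in the DGM paper, and `dgm_style_loss` and the parameter values are illustrative.

```python
import torch

torch.manual_seed(0)
r, sigma, K, T, S_max = 0.05, 0.2, 1.0, 1.0, 3.0

net = torch.nn.Sequential(          # stand-in for the DGM network architecture
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def d(y, x):
    return torch.autograd.grad(y, x, grad_outputs=torch.ones_like(y), create_graph=True)[0]

def dgm_style_loss(batch=512):
    """PDE residual on random interior points plus a terminal-condition penalty,
    for the 1-D Black-Scholes equation with a European call payoff."""
    t = torch.rand(batch, 1, requires_grad=True) * T
    S = torch.rand(batch, 1, requires_grad=True) * S_max
    u = net(torch.cat([t, S], dim=1))
    u_t, u_S = d(u, t), d(u, S)
    u_SS = d(u_S, S)
    residual = u_t + 0.5 * sigma**2 * S**2 * u_SS + r * S * u_S - r * u
    S_T = torch.rand(batch, 1) * S_max                      # fresh points at maturity
    u_T = net(torch.cat([torch.full_like(S_T, T), S_T], dim=1))
    terminal = u_T - torch.clamp(S_T - K, min=0)
    return (residual**2).mean() + (terminal**2).mean()

# One would minimize this mesh-free loss over the network parameters.
print(float(dgm_style_loss()))
```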
In this paper, we develop fast procedures for solving linear systems arising from the discretization of ordinary and partial differential equations with a Caputo fractional derivative with respect to the time variable. First, we consider a finite difference scheme to solve a two-sided fractional ordinary differential equation. Furthermore, we present a fast solution technique to accelerate the Toeplitz matrix-vector multiplications arising from the finite difference discretization. This fast solution technique is based on the fast Fourier transform and depends on the special structure of the coefficient matrices, and it helps to reduce the computational work from the $O(N^{3})$ required by traditional methods to $O(N\log^{2}N)$ and the memory requirement from $O(N^{2})$ to $O(N)$ without using any lossy compression, where $N$ is the number of unknowns. Two finite difference schemes to solve time fractional hyperbolic equations with different fractional orders $\gamma$ are considered. We present a fast solution technique depending on the special structure of the coefficient matrices by rearranging the order of the unknowns. It helps to reduce the computational work from the $O(N^{2}M)$ required by traditional methods to $O(N\log^{2}N)$ and the memory requirement from $O(NM)$ to $O(N)$ without using any lossy compression, where $N=\tau^{-1}$ and $\tau$ is the size of the time step, and $M=h^{-1}$ and $h$ is the size of the space step. Importantly, a fast method is employed to solve the classical time fractional diffusion equation at a lower cost of $O(MN\log^{2}N)$, whereas the direct method requires an overall computational complexity of $O(N^{2}M)$. Moreover, the applicability and accuracy of the schemes are demonstrated by numerical experiments that support our theoretical analysis.
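
The workhorse behind such per-product costs of $O(N\log N)$ is the standard circulant-embedding trick: the Toeplitz matrix is extended to a $2N \times 2N$ circulant matrix, which the FFT diagonalizes, so a matrix-vector product needs only FFTs and $O(N)$ storage. The sketch below shows that generic trick, not the paper's particular difference schemes or reordering of unknowns; `toeplitz_matvec` is a hypothetical helper.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply the Toeplitz matrix with first column c and first row r
    (c[0] == r[0]) by x in O(N log N) via circulant embedding and the FFT."""
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])          # first column of the 2N circulant
    eig = np.fft.fft(col)                                # eigenvalues of the circulant
    y = np.fft.ifft(eig * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

# Check against a dense product on a small random Toeplitz matrix.
rng = np.random.default_rng(0)
n = 400
c, r = rng.standard_normal(n), rng.standard_normal(n)
r[0] = c[0]
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(n)] for i in range(n)])
x = rng.standard_normal(n)
print(np.allclose(toeplitz_matvec(c, r, x), T @ x))      # True
```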
Christian Beck, Weinan E (2017)
High-dimensional partial differential equations (PDEs) appear in a number of models from the financial industry, such as derivative pricing models, credit valuation adjustment (CVA) models, or portfolio optimization models. The PDEs in such applications are high-dimensional because the dimension corresponds to the number of financial assets in a portfolio. Moreover, such PDEs are often fully nonlinear due to the need to incorporate certain nonlinear phenomena in the model, such as default risks, transaction costs, volatility uncertainty (Knightian uncertainty), or trading constraints. Such high-dimensional fully nonlinear PDEs are exceedingly difficult to solve, as the computational effort for standard approximation methods grows exponentially with the dimension. In this work we propose a new method for solving high-dimensional fully nonlinear second-order PDEs. Our method can in particular be used to sample from high-dimensional nonlinear expectations. The method is based on (i) a connection between fully nonlinear second-order PDEs and second-order backward stochastic differential equations (2BSDEs), (ii) a merged formulation of the PDE and the 2BSDE problem, (iii) a temporal forward discretization of the 2BSDE and a spatial approximation via deep neural nets, and (iv) a stochastic gradient descent-type optimization procedure. Numerical results obtained using TensorFlow in Python illustrate the efficiency and the accuracy of the method in the cases of a $100$-dimensional Black-Scholes-Barenblatt equation, a $100$-dimensional Hamilton-Jacobi-Bellman equation, and a nonlinear expectation of a $100$-dimensional $G$-Brownian motion.
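
As a heavily simplified sketch of ingredients (iii) and (iv) only, the PyTorch snippet below discretizes a forward SDE and its associated BSDE in time, parametrizes the initial value and the $Z$-process by trainable objects, and optimizes them by stochastic gradient descent. It uses the semilinear (in fact linear) one-dimensional Black-Scholes case rather than a fully nonlinear 2BSDE, so it is not the authors' method; `y0` and `z_net` are illustrative names.

```python
import math
import torch

torch.manual_seed(0)
r, sigma, K, T, x0 = 0.05, 0.2, 1.0, 1.0, 1.0
N, dt, batch = 20, 1.0 / 20, 512

y0 = torch.nn.Parameter(torch.tensor(0.1))              # learned value u(0, x0)
z_net = torch.nn.Sequential(                             # learned Z_n at (t_n, X_n)
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam([y0, *z_net.parameters()], lr=1e-2)

for step in range(2000):
    X = torch.full((batch, 1), x0)
    Y = y0.expand(batch, 1)
    for n in range(N):
        t = torch.full((batch, 1), n * dt)
        Z = z_net(torch.cat([t, X], dim=1))
        dW = math.sqrt(dt) * torch.randn(batch, 1)
        Y = Y + r * Y * dt + Z * dW                      # BSDE step with driver f(y) = -r*y
        X = X + r * X * dt + sigma * X * dW              # risk-neutral forward dynamics
    loss = ((Y - torch.clamp(X - K, min=0)) ** 2).mean() # match the terminal call payoff
    opt.zero_grad(); loss.backward(); opt.step()

print(float(y0))   # should approach the analytic Black-Scholes call price
```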