We propose two deep neural network-based methods for solving semi-martingale optimal transport problems. The first method is based on a relaxation/penalization of the terminal constraint, and is solved using deep neural networks. The second method is based on the dual formulation of the problem, which we express as a saddle point problem, and is solved using adversarial networks. Both methods are mesh-free and therefore mitigate the curse of dimensionality. We test the performance and accuracy of our methods on several examples up to dimension 10. We also apply the first algorithm to a portfolio optimization problem where the goal is, given an initial wealth distribution, to find an investment strategy leading to a prescribed terminal wealth distribution.
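As a rough illustration of the penalization idea behind the first method, the toy sketch below trains a small drift network so that the Euler-discretised terminal state approximately matches samples from a target law. The Gaussian-kernel MMD penalty, the network architecture and all constants are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch (not the paper's algorithm): penalize the terminal
# distribution of a controlled SDE so that X_T approximately matches a target
# sample. The distributional penalty is a Gaussian-kernel MMD stand-in.
import torch

torch.manual_seed(0)
n_steps, dt, batch = 20, 1.0 / 20, 512

drift = torch.nn.Sequential(                 # learned drift b_theta(t, x)
    torch.nn.Linear(2, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))
opt = torch.optim.Adam(drift.parameters(), lr=1e-3)

def mmd2(x, y, bw=1.0):
    """Squared MMD with a Gaussian kernel between two one-dimensional samples."""
    k = lambda a, b: torch.exp(-torch.cdist(a, b) ** 2 / (2 * bw ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

target = 1.0 + 0.3 * torch.randn(batch, 1)   # toy target terminal law N(1, 0.3^2)

for it in range(500):
    x = torch.zeros(batch, 1)                # X_0 = 0
    for i in range(n_steps):                 # Euler-Maruyama with learned drift
        t = torch.full_like(x, i * dt)
        x = x + drift(torch.cat([t, x], dim=1)) * dt \
              + 0.2 * torch.randn_like(x) * dt ** 0.5
    loss = mmd2(x, target)                   # penalized terminal constraint
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final MMD^2:", float(loss))
```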
This paper studies a portfolio allocation problem, where the goal is to prescribe the wealth distribution at the final time. We study this problem with the tools of optimal mass transport. We provide a dual formulation which we solve by a gradient descent algorithm. This involves solving an associated HJB and Fokker-Planck equation by a finite difference method. Numerical examples for various prescribed terminal distributions are given, showing that we can successfully reach attainable targets. We next consider adding consumption during the investment process, to take into account distributions that are either not attainable or sub-optimal.
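To make the finite-difference component concrete, here is a minimal sketch of an explicit scheme for a one-dimensional Fokker-Planck equation with constant coefficients; the grid, the constant drift and volatility, and the boundary treatment are assumptions for illustration, not the scheme used in the paper.

```python
# Minimal sketch: explicit finite differences for the 1-d Fokker-Planck equation
#   p_t = -(mu * p)_x + 0.5 * (sigma^2 * p)_xx,
# the kind of density evolution involved when tracking a wealth distribution.
import numpy as np

nx, nt = 200, 2000
x = np.linspace(-4.0, 4.0, nx)
dx, dt = x[1] - x[0], 1.0 / nt
mu, sigma = 0.1, 0.5                         # illustrative constant drift / vol

p = np.exp(-x**2 / 0.5)
p /= p.sum() * dx                            # initial density ~ N(0, 0.25)

for _ in range(nt):
    flux = mu * p                            # advective flux mu * p
    dflux = (np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    diff = 0.5 * sigma**2 * (np.roll(p, -1) - 2 * p + np.roll(p, 1)) / dx**2
    p = p + dt * (-dflux + diff)
    p[0] = p[-1] = 0.0                       # boundaries far enough to be negligible

print("mass:", p.sum() * dx, "mean:", (x * p).sum() * dx)
```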
In this paper, we develop a deep neural network approach to solve a lifetime expected mortality-weighted utility-based model for optimal consumption in the decumulation phase of a defined contribution pension system. We formulate this problem as a multi-period finite-horizon stochastic control problem and train a deep neural network policy representing consumption decisions. The optimal consumption policy is determined by personal information about the retiree such as age, wealth, risk aversion and bequest motive, as well as a series of economic and financial variables including inflation rates and asset returns jointly simulated from a proposed seven-factor economic scenario generator calibrated from market data. We use the Australian pension system as an example, with consideration of the government-funded means-tested Age Pension and other practical aspects such as fund management fees. The key findings from our numerical tests are as follows. First, our deep neural network optimal consumption policy, which adapts to changes in market conditions, outperforms deterministic drawdown rules proposed in the literature. Moreover, the out-of-sample outperformance ratios increase as the number of training iterations increases, eventually reaching outperformance on all testing scenarios after less than 10 minutes of training. Second, a sensitivity analysis is performed to reveal how risk aversion and bequest motives change consumption over a retiree's lifetime under this utility framework. Third, we provide the optimal consumption rate with different starting wealth balances. We observe that optimal consumption rates are not proportional to initial wealth due to the Age Pension payment. Fourth, with the same initial wealth balance and utility parameter settings, the optimal consumption level is different between males and females due to gender differences in mortality.
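The sketch below illustrates the general shape of such a neural-network consumption policy: a small network maps (time, wealth) to a consumption fraction and is trained to maximise expected discounted CRRA utility over simulated return paths. It omits mortality weighting, the means-tested Age Pension, fees and the seven-factor scenario generator, and all parameters are illustrative.

```python
# Illustrative sketch only (heavily simplified relative to the paper's setting).
import torch

torch.manual_seed(0)
n_years, gamma, beta = 25, 3.0, 0.97           # horizon, risk aversion, discount
policy = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1), torch.nn.Sigmoid()) # consumption fraction in (0, 1)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def crra(c, g=gamma):                          # CRRA utility u(c) = c^(1-g) / (1-g)
    return c.clamp(min=1e-6) ** (1 - g) / (1 - g)

for it in range(1000):
    batch = 256
    w = torch.full((batch, 1), 100.0)          # initial retirement balance
    total_u = torch.zeros(batch, 1)
    for t in range(n_years):
        feat = torch.cat([torch.full_like(w, t / n_years), w / 100.0], dim=1)
        c = policy(feat) * w                   # consume a learned fraction of wealth
        r = 1.0 + 0.04 + 0.10 * torch.randn(batch, 1)  # toy i.i.d. annual returns
        w = (w - c) * r
        total_u = total_u + beta ** t * crra(c)
    loss = -total_u.mean()                     # maximise expected discounted utility
    opt.zero_grad()
    loss.backward()
    opt.step()
```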
We consider explicit approximations for European put option prices within the Stochastic Verhulst model with time-dependent parameters, where the volatility process follows the dynamics $dV_t = \kappa_t (\theta_t - V_t) V_t \, dt + \lambda_t V_t \, dB_t$. Our methodology involves writing the put option price as an expectation of a Black-Scholes formula, reparameterising the volatility process and then performing a number of expansions. The main difficulty lies in explicitly computing a number of expectations induced by the expansion procedure. We do this by appealing to techniques from Malliavin calculus. Moreover, we deduce that our methodology extends to models with more general drift and diffusion coefficients for the volatility process. We obtain an explicit representation of the error generated by the expansion procedure, and we provide sufficient ingredients to obtain a meaningful bound. Under the assumption of piecewise-constant parameters, our approximation formulas become closed-form, and moreover we are able to establish a fast calibration scheme. Furthermore, we perform a numerical sensitivity analysis to investigate the quality of our approximation formula in the Stochastic Verhulst model, and show that the errors are well within the acceptable range for application purposes.
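For intuition on the first step, the sketch below prices the put via the mixing representation E[BS_put(total variance)], simulating the Verhulst volatility with a simple Euler scheme. It assumes zero correlation between the volatility and the asset's driving Brownian motion, interprets V_t as the instantaneous volatility (so the total variance is the integral of V_t^2), and uses constant parameters; none of these choices is taken from the paper.

```python
# Minimal numerical sketch (not the paper's expansion): Monte Carlo evaluation of
# E[ BS_put(total variance) ] with the Verhulst volatility simulated by Euler.
import numpy as np
from scipy.stats import norm

def bs_put(s0, k, total_var):
    """Black-Scholes put with zero rates, as a function of total variance."""
    sig = np.sqrt(total_var)
    d1 = (np.log(s0 / k) + 0.5 * total_var) / sig
    d2 = d1 - sig
    return k * norm.cdf(-d2) - s0 * norm.cdf(-d1)

rng = np.random.default_rng(0)
s0, k, T = 100.0, 100.0, 1.0
kappa, theta, lam, v0 = 2.0, 0.2, 0.3, 0.2     # illustrative constant parameters
n_paths, n_steps = 100_000, 200
dt = T / n_steps

v = np.full(n_paths, v0)
int_var = np.zeros(n_paths)
for _ in range(n_steps):                        # Euler: dV = kappa(theta - V)V dt + lam V dB
    int_var += v**2 * dt
    v = v + kappa * (theta - v) * v * dt + lam * v * np.sqrt(dt) * rng.standard_normal(n_paths)
    v = np.maximum(v, 1e-8)                     # keep the discretised path positive

print("mixing-formula put price:", bs_put(s0, k, int_var).mean())
```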
This paper revisits the problem of computing empirical cumulative distribution functions (ECDF) efficiently on large, multivariate datasets. Computing an ECDF at one evaluation point requires $\mathcal{O}(N)$ operations on a dataset composed of $N$ data points. Therefore, a direct evaluation of ECDFs at $N$ evaluation points requires $\mathcal{O}(N^2)$ operations, which is prohibitive for large-scale problems. Two fast and exact methods are proposed and compared. The first one is based on fast summation in lexicographical order, with a $\mathcal{O}(N \log N)$ complexity, and requires the evaluation points to lie on a regular grid. The second one is based on the divide-and-conquer principle, with a $\mathcal{O}(N \log(N)^{(d-1) \vee 1})$ complexity, and requires the evaluation points to coincide with the input points. The two fast algorithms are described and detailed in the general $d$-dimensional case, and numerical experiments validate their speed and accuracy. Secondly, the paper establishes a direct connection between cumulative distribution functions and kernel density estimation (KDE) for a large class of kernels. This connection paves the way for fast exact algorithms for multivariate kernel density estimation and kernel regression. Numerical tests with the Laplacian kernel validate the speed and accuracy of the proposed algorithms. A broad range of large-scale multivariate density estimation, cumulative distribution estimation, survival function estimation and regression problems can benefit from the proposed numerical methods.
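For reference, the sketch below contrasts the quadratic baseline (counting coordinatewise-dominated points) with the sorting-based $\mathcal{O}(N \log N)$ evaluation available in one dimension; it does not reproduce the paper's lexicographic or divide-and-conquer algorithms.

```python
# Sketch for illustration only: quadratic ECDF baseline vs. the 1-d sorting shortcut.
import numpy as np

def ecdf_naive(x):
    """ECDF of x (N, d) evaluated at the N data points: O(N^2 d) comparisons."""
    n = x.shape[0]
    dominated = (x[None, :, :] <= x[:, None, :]).all(axis=2)  # x_j <= x_i coordinatewise
    return dominated.sum(axis=1) / n

def ecdf_1d(x):
    """1-d ECDF at the data points in O(N log N) via sorting (assumes no ties)."""
    order = np.argsort(x, kind="mergesort")
    ranks = np.empty_like(order)
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks / len(x)

rng = np.random.default_rng(0)
x = rng.standard_normal((500, 1))
assert np.allclose(ecdf_naive(x), ecdf_1d(x[:, 0]))
```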
This paper addresses the problem of utility maximization under uncertain parameters. In contrast with the classical approach, where the parameters of the model evolve freely within a given range, we constrain them via a penalty function. We show that this robust optimization process can be interpreted as a two-player zero-sum stochastic differential game. We prove that the value function satisfies the Dynamic Programming Principle and that it is the unique viscosity solution of an associated Hamilton-Jacobi-Bellman-Isaacs equation. We test this robust algorithm on real market data. The results show that robust portfolios generally have higher expected utilities and are more stable under strong market downturns. To solve for the value function, we derive an analytical solution in the logarithmic utility case and obtain accurate numerical approximations in the general case by three methods: finite difference method, Monte Carlo simulation, and Generative Adversarial Networks.
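A stylised stand-in for the penalised robust problem in the logarithmic-utility case is sketched below: the adversary shifts the drift at a quadratic cost, and a brute-force grid min-max is checked against the closed-form robust weight (mu - r) / (sigma^2 + eps) of this toy game. The model, penalty and parameters are illustrative and simpler than those in the paper.

```python
# Toy sketch (not the paper's model): robust log-utility portfolio choice where the
# adversary perturbs the drift by `a` at quadratic cost a^2 / (2 * eps).
import numpy as np

mu, r, sigma, eps = 0.08, 0.02, 0.2, 0.05

def growth_rate(pi, a):
    """Expected log-growth rate with drift perturbation a plus the adversary's penalty."""
    return pi * (mu + a - r) + r - 0.5 * pi**2 * sigma**2 + a**2 / (2 * eps)

pis = np.linspace(0.0, 3.0, 601)
shifts = np.linspace(-1.0, 1.0, 801)
inner = growth_rate(pis[:, None], shifts[None, :]).min(axis=1)  # adversary minimises
pi_numeric = pis[inner.argmax()]                                # investor maximises
pi_closed = (mu - r) / (sigma**2 + eps)                         # analytic robust weight

print(f"grid min-max weight {pi_numeric:.3f} vs closed form {pi_closed:.3f}")
```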
We consider closed-form approximations for European put option prices within the Heston and GARCH diffusion stochastic volatility models with time-dependent parameters. Our methodology involves writing the put option price as an expectation of a Black-Scholes formula and performing a second-order Taylor expansion around the mean of its argument. The main difficulty then lies in simplifying a number of expectations induced by the Taylor expansion. Under the assumption of piecewise-constant parameters, we derive closed-form pricing formulas and devise a fast calibration scheme. Furthermore, we perform a numerical error and sensitivity analysis to investigate the quality of our approximation and show that the errors are well within the acceptable range for application purposes. Lastly, we derive bounds on the remainder term generated by the Taylor expansion.
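The core expansion step can be illustrated as follows: writing the put as E[BS_put(V)] for a random total variance V and Taylor-expanding to second order around m = E[V] gives P roughly BS_put(m) + 0.5 * BS_put''(m) * Var(V). In the sketch below the law of V is a placeholder lognormal and the derivative is taken numerically, so nothing here reproduces the paper's closed-form result.

```python
# Minimal sketch of the second-order expansion idea (not the paper's formulas).
import numpy as np
from scipy.stats import norm

def bs_put(s0, k, v):
    """Black-Scholes put with zero rates, as a function of total variance v."""
    sig = np.sqrt(v)
    d1 = (np.log(s0 / k) + 0.5 * v) / sig
    return k * norm.cdf(-(d1 - sig)) - s0 * norm.cdf(-d1)

rng = np.random.default_rng(0)
s0, k = 100.0, 100.0
V = np.exp(rng.normal(np.log(0.04), 0.3, size=1_000_000))  # placeholder law of total variance

m, var = V.mean(), V.var()
h = 1e-4                                                    # numerical second derivative in v
second_deriv = (bs_put(s0, k, m + h) - 2 * bs_put(s0, k, m) + bs_put(s0, k, m - h)) / h**2

mc_price = bs_put(s0, k, V).mean()                          # "exact" mixing price
taylor_price = bs_put(s0, k, m) + 0.5 * second_deriv * var  # second-order expansion

print(f"Monte Carlo {mc_price:.4f}  vs  second-order expansion {taylor_price:.4f}")
```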