
Deep Semi-Martingale Optimal Transport

Posted by Wei Ning
Publication date: 2021
Paper language: English





We propose two deep neural network-based methods for solving semi-martingale optimal transport problems. The first method is based on a relaxation/penalization of the terminal constraint, and is solved using deep neural networks. The second method is based on the dual formulation of the problem, which we express as a saddle point problem, and is solved using adversarial networks. Both methods are mesh-free and therefore mitigate the curse of dimensionality. We test the performance and accuracy of our methods on several examples up to dimension 10. We also apply the first algorithm to a portfolio optimization problem where the goal is, given an initial wealth distribution, to find an investment strategy leading to a prescribed terminal wealth distribution.
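As a hedged illustration of the first (penalization) method, the sketch below replaces the deep network with a single constant drift parameter and the terminal-distribution constraint with a penalty on the terminal mean alone; the target, penalty weight, and grid search are all hypothetical choices for this toy, not the authors' implementation.

```python
import numpy as np

# Toy 1-D version of the relaxation/penalization approach: steer
# dX_t = b dt + sigma dW_t so that E[X_1] is close to a prescribed target,
# trading off the control energy E[integral of b^2/2 dt] against the
# terminal mismatch.  (Hypothetical parameters throughout.)
sigma, T, n_steps, n_paths = 0.5, 1.0, 50, 2000
m_target, lam = 2.0, 50.0      # target terminal mean and penalty weight
dt = T / n_steps

def penalized_cost(b):
    """Energy cost plus a penalty relaxing the constraint E[X_T] = m_target."""
    rng = np.random.default_rng(0)        # common random numbers across b
    x = rng.standard_normal(n_paths)      # X_0 ~ N(0, 1)
    for _ in range(n_steps):              # Euler-Maruyama for dX = b dt + sigma dW
        x = x + b * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return 0.5 * b**2 * T + lam * (x.mean() - m_target) ** 2

# a grid search over the constant drift stands in for SGD on network weights
grid = np.linspace(0.0, 4.0, 81)
b_star = grid[np.argmin([penalized_cost(b) for b in grid])]
# b_star should sit near the penalized optimum 4*lam/(1 + 2*lam) ≈ 1.98
```

The penalty weight `lam` controls how tightly the terminal constraint is enforced; the mesh-free character of the actual method comes from the fact that only simulated paths, not a spatial grid, enter the loss.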


Read also

We study the problem of bounding path-dependent expectations (within any finite time horizon $d$) over the class of discrete-time martingales whose marginal distributions lie within a prescribed tolerance of a given collection of benchmark marginal distributions. This problem is a relaxation of the martingale optimal transport (MOT) problem and is motivated by applications to super-hedging in financial markets. We show that the empirical version of our relaxed MOT problem can be approximated within $O\left(n^{-1/2}\right)$ error, where $n$ is the number of samples of each of the individual marginal distributions (generated independently), using a suitably constructed finite-dimensional linear programming problem.
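A minimal discrete analogue of the linear-programming approximation can be sketched as follows; the instance (marginals, cost) is hypothetical and the marginal tolerance is dropped, leaving a plain two-period martingale OT linear program.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical two-period discrete MOT instance: X ~ mu on {-1, 1},
# Y ~ nu on {-2, 0, 2}, and the coupling pi must be a martingale,
# i.e. E[Y | X = x] = x.  We maximize E[c(X, Y)] by LP.
x_vals = np.array([-1.0, 1.0]);       mu = np.array([0.5, 0.5])
y_vals = np.array([-2.0, 0.0, 2.0]);  nu = np.array([0.3, 0.4, 0.3])
c = np.array([[y**2 if x < 0 else 0.0 for y in y_vals] for x in x_vals])

n, m = len(x_vals), len(y_vals)
A_eq, b_eq = [], []
for i in range(n):                    # row marginals: sum_j pi[i,j] = mu[i]
    row = np.zeros((n, m)); row[i, :] = 1; A_eq.append(row.ravel()); b_eq.append(mu[i])
for j in range(m):                    # column marginals: sum_i pi[i,j] = nu[j]
    col = np.zeros((n, m)); col[:, j] = 1; A_eq.append(col.ravel()); b_eq.append(nu[j])
for i in range(n):                    # martingale: sum_j pi[i,j] * (y_j - x_i) = 0
    mg = np.zeros((n, m)); mg[i, :] = y_vals - x_vals[i]; A_eq.append(mg.ravel()); b_eq.append(0.0)

res = linprog(-c.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, None)] * (n * m), method="highs")
upper_price = -res.fun
print(upper_price)  # 1.4 for this instance
```

Relaxing the marginal constraints to a tolerance band, as the paper does, would simply turn the marginal equalities into pairs of inequalities in the same LP.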
While many questions in (robust) finance can be posed in the martingale optimal transport (MOT) framework, others require considering non-linear cost functionals as well. Following the terminology of Gozlan, Roberto, Samson and Tetali, this corresponds to weak martingale optimal transport (WMOT). In this article we establish stability of WMOT, which is important since financial data can give only imprecise information on the underlying marginals. As an application, we deduce the stability of the superreplication bound for VIX futures as well as the stability of stretched Brownian motion, and we derive a monotonicity principle for WMOT.
We investigate the problem of optimal transport in the so-called Kantorovich form, i.e. given two Radon measures on two compact sets, we seek an optimal transport plan which is another Radon measure on the product of the sets that has these two measures as marginals and minimizes a certain cost function. We consider quadratic regularization of the problem, which forces the optimal transport plan to be a square integrable function rather than a Radon measure. We derive the dual problem and show strong duality and existence of primal and dual solutions to the regularized problem. Then we derive two algorithms to solve the dual problem of the regularized problem: a Gauss-Seidel method and a semismooth quasi-Newton method, and investigate both methods numerically. Our experiments show that the methods perform well even for small regularization parameters. Quadratic regularization is of interest since the resulting optimal transport plans are sparse, i.e. they have a small support (which is not the case for the often used entropic regularization, where the optimal transport plan always has full measure).
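In the discrete case the dual of the quadratically regularized problem can be attacked coordinate-wise; the sketch below runs an exact Gauss-Seidel sweep on a hypothetical 3×3 instance (the semismooth quasi-Newton variant is not shown, and the instance is made up for illustration).

```python
import numpy as np

# Quadratically regularized discrete OT:
#   min  <c, pi> + (g/2) ||pi||^2   s.t.  pi 1 = mu,  pi^T 1 = nu,  pi >= 0,
# with dual
#   max  a.mu + b.nu - (1/2g) sum_ij max(a_i + b_j - c_ij, 0)^2
# and optimal plan  pi_ij = max(a_i + b_j - c_ij, 0) / g.
c = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0],
              [2.0, 1.0, 0.0]])
mu = np.full(3, 1/3); nu = np.full(3, 1/3); g = 0.05

def coord_update(t, target):
    """Solve sum_j max(a + t_j, 0) = target for a (piecewise linear, exact)."""
    ts = np.sort(t)[::-1]
    cum = 0.0
    for k in range(1, len(ts) + 1):          # try active sets of the k largest t_j
        cum += ts[k - 1]
        a = (target - cum) / k
        if a + ts[k - 1] >= 0 and (k == len(ts) or a + ts[k] <= 0):
            return a
    raise RuntimeError("no root found")

a = np.zeros(3); b = np.zeros(3)
for _ in range(200):                         # Gauss-Seidel sweeps on the dual
    for i in range(3):
        a[i] = coord_update(b - c[i, :], g * mu[i])
    for j in range(3):
        b[j] = coord_update(a - c[:, j], g * nu[j])

pi = np.maximum(a[:, None] + b[None, :] - c, 0) / g
print(np.round(pi, 3))  # a sparse plan whose marginals match mu and nu
```

Note how the recovered plan has small support, in line with the sparsity the abstract attributes to quadratic regularization.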
We consider a given region $\Omega$ where the traffic flows according to two regimes: in a region $C$ we have low congestion, while in the remaining part $\Omega\setminus C$ the congestion is higher. The two congestion functions $H_1$ and $H_2$ are given, but the region $C$ has to be determined in an optimal way in order to minimize the total transportation cost. Various penalization terms on $C$ are considered and some numerical computations are shown.
Wuchen Li, Guido Montufar (2018)
We study a natural Wasserstein gradient flow on manifolds of probability distributions with discrete sample spaces. We derive the Riemannian structure for the probability simplex from the dynamical formulation of the Wasserstein distance on a weighted graph. We pull back the geometric structure to the parameter space of any given probability model, which allows us to define a natural gradient flow there. In contrast to the natural Fisher-Rao gradient, the natural Wasserstein gradient incorporates a ground metric on sample space. We illustrate the analysis of elementary exponential family examples and demonstrate an application of the Wasserstein natural gradient to maximum likelihood estimation.
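A miniature of this construction can be sketched on a 3-node path graph, assuming arithmetic-mean edge weights and the entropy as energy; these choices are made here for illustration and need not match the paper's.

```python
import numpy as np

# Wasserstein gradient flow of the entropy F(p) = sum p_i log p_i on a
# 3-node path graph.  The metric uses the weighted graph Laplacian
# L(p) = B^T diag(theta(p)) B, with edge weight theta_e = mean of p at the
# edge's endpoints, giving the flow  dp/dt = -L(p) grad F(p).
B = np.array([[-1.0, 1.0, 0.0],    # incidence matrix: edge (1,2)
              [0.0, -1.0, 1.0]])   # edge (2,3)

def flow_step(p, dt):
    theta = np.array([(p[0] + p[1]) / 2, (p[1] + p[2]) / 2])  # edge weights
    grad_F = np.log(p) + 1.0                                  # entropy gradient
    return p - dt * (B.T @ (theta * (B @ grad_F)))            # explicit Euler step

p = np.array([0.7, 0.2, 0.1])
for _ in range(5000):
    p = flow_step(p, 0.01)
print(np.round(p, 3))  # heat-like flow drives p toward the uniform distribution
```

Because each Euler step moves mass only along edges (the increments sum to zero), total probability is conserved exactly, which is the discrete counterpart of the continuity-equation formulation behind the Wasserstein distance.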