We show that every $\mathbb{R}^d$-valued Sobolev path with regularity $\alpha$ and integrability $p$ can be lifted to a Sobolev rough path provided $\alpha > 1/p > 1/3$. The novelty of our approach is its use of ideas underlying Hairer's reconstruction theorem, generalized to a framework allowing for Sobolev models and Sobolev modelled distributions. Moreover, we show that the corresponding lifting map is locally Lipschitz continuous with respect to the inhomogeneous Sobolev metric.
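For orientation, the fractional Sobolev regularity referred to above is commonly measured by the Gagliardo–Slobodeckij seminorm; the inhomogeneous Sobolev metric of the paper is the rough-path analogue of this scalar quantity, recalled here only as a standard reminder:

$$\|X\|_{W^{\alpha,p}([0,T])} := \left( \int_0^T \int_0^T \frac{|X_t - X_s|^p}{|t-s|^{1+\alpha p}}\,\mathrm{d}s\,\mathrm{d}t \right)^{1/p}, \qquad \alpha \in (0,1),\ p \in [1,\infty).$$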
The Kyle model describes how an equilibrium of order sizes and security prices naturally arises between a trader with insider information and the price-providing market maker as they interact through a series of auctions. Ever since it was introduced by Albert S. Kyle in 1985, the model has been important in the study of market microstructure models with asymmetric information. Since it is well understood, it offers an excellent opportunity to study how modern deep learning technology can be used to replicate and better understand equilibria that occur in certain market learning problems. We model the agents in Kyle's single-period setting using deep neural networks. The networks are trained by interacting with each other, following the rules and objectives defined by Kyle. We show how the right network architectures and training methods lead to the agents' behaviour converging to the theoretical equilibrium predicted by Kyle's model.
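A minimal PyTorch-style sketch of such a training loop is given below; the architectures, batch sizes and the alternating update scheme are illustrative assumptions and not the authors' exact specification.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
sigma_v, sigma_u = 1.0, 1.0  # std of asset value and of noise-trader volume (assumed)

# Insider: maps the observed true value v to an order size x.
insider = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
# Market maker: maps the aggregate order flow y = x + u to a price p.
maker = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_i = torch.optim.Adam(insider.parameters(), lr=1e-3)
opt_m = torch.optim.Adam(maker.parameters(), lr=1e-3)

for step in range(5000):
    v = sigma_v * torch.randn(512, 1)   # true asset value
    u = sigma_u * torch.randn(512, 1)   # noise-trader order flow

    # Market maker update: the price should be the conditional expectation of v
    # given the total order flow, so minimize E[(p - v)^2].
    x = insider(v).detach()
    loss_m = ((maker(x + u) - v) ** 2).mean()
    opt_m.zero_grad(); loss_m.backward(); opt_m.step()

    # Insider update: maximize expected profit E[(v - p) * x] against the
    # current pricing rule (gradients flow through the maker network).
    x = insider(v)
    p = maker(x + u)
    loss_i = -((v - p) * x).mean()
    opt_i.zero_grad(); loss_i.backward(); opt_i.step()

# Kyle's theoretical equilibrium for comparison: x = beta * v with beta = sigma_u / sigma_v,
# p = lambda * (x + u) with lambda = sigma_v / (2 * sigma_u).
```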
In 2002, Benjamin Jourdain and Claude Martini discovered that, for a class of payoff functions, the pricing problem for American options can be reduced to the pricing of European options for an appropriately associated payoff, all within a Black-Scholes framework. This discovery was investigated in great detail by Sören Christensen, Jan Kallsen and Matthias Lenga in a recent work in 2020. In the present work we prove that this phenomenon can be observed in a wider context, and even holds true in a setup of non-linear stochastic processes. We analyse this problem from both probabilistic and analytic viewpoints. In the classical situation, Jourdain and Martini used this method to approximate prices of American put options. The broader applicability now potentially covers non-linear frameworks such as model uncertainty and controller-and-stopper games.
Consistent Recalibration models (CRC) have been introduced to capture in necessary generality the dynamic features of term structures of derivatives prices. Several approaches have been suggested to tackle this problem, but all of them, including CRC models, suffered from numerical intractabilities, mainly due to the presence of complicated drift terms or consistency conditions. We overcome this problem by machine learning techniques, which allow us to store the crucial drift term information in neural-network-type functions. This yields, for the first time, dynamic term structure models that can be simulated efficiently.
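As a purely illustrative sketch (all names and dimensions below are hypothetical), the idea of storing the drift in a network and then simulating the term structure could look as follows, assuming the drift network has already been trained offline:

```python
import torch
import torch.nn as nn

# Hypothetical setup: the term structure is discretized into d state variables
# (e.g. points of a volatility codebook); the consistency-induced drift is
# approximated by a trained neural network instead of being evaluated in closed form.
d = 20
drift_net = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, d))

def simulate(x0, vol, n_steps=250, dt=1.0 / 250.0):
    """Euler-Maruyama simulation of dX_t = mu_theta(X_t) dt + vol(X_t) dW_t,
    where the drift mu_theta is given by the (pre-trained) network."""
    x = x0.clone()
    path = [x0]
    for _ in range(n_steps):
        dw = torch.randn_like(x) * dt ** 0.5
        x = x + drift_net(x) * dt + vol(x) * dw
        path.append(x)
    return torch.stack(path)

# Example: constant diffusion coefficient, flat initial state.
paths = simulate(torch.zeros(1, d), vol=lambda x: 0.2, n_steps=100)
```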
Combinations of neural ODEs with recurrent neural networks (RNN), like GRU-ODE-Bayes or ODE-RNN, are well suited to model irregularly observed time series. While those models outperform existing discrete-time approaches, no theoretical guarantees for their predictive capabilities are available. Assuming that the irregularly sampled time series data originates from a continuous stochastic process, the $L^2$-optimal online prediction is the conditional expectation given the currently available information. We introduce the Neural Jump ODE (NJ-ODE), which provides a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process. Our approach models the conditional expectation between two observations with a neural ODE and jumps whenever a new observation is made. We define a novel training framework which allows us to prove theoretical guarantees for the first time. In particular, we show that the output of our model converges to the $L^2$-optimal prediction. This can be interpreted as the solution to a special filtering problem. We provide experiments showing that the theoretical results also hold empirically. Moreover, we experimentally show that our model outperforms the baselines in more complex learning tasks and give comparisons on real-world datasets.
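The following is a minimal, self-contained sketch of the jump-ODE mechanism (hypothetical layer sizes, explicit Euler discretization of the latent ODE); the authors' exact parametrization and training objective differ in detail.

```python
import torch
import torch.nn as nn

class NeuralJumpODE(nn.Module):
    """Minimal sketch of a jump-ODE predictor: a latent state evolves via an ODE
    between observation times and is reset by a jump network at each observation."""
    def __init__(self, obs_dim=1, hidden=32):
        super().__init__()
        self.ode_func = nn.Sequential(nn.Linear(hidden + 1, hidden), nn.Tanh(),
                                      nn.Linear(hidden, hidden))
        self.jump = nn.Sequential(nn.Linear(hidden + obs_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, hidden))
        self.readout = nn.Linear(hidden, obs_dim)
        self.hidden = hidden

    def forward(self, obs_times, obs_values, t_grid):
        """obs_times/obs_values: irregular observations; t_grid: evaluation grid."""
        h = torch.zeros(self.hidden)
        preds, k = [], 0
        t_prev = t_grid[0]
        for t in t_grid:
            # Euler step of the latent ODE between grid points (time fed as input).
            dt = t - t_prev
            h = h + self.ode_func(torch.cat([h, t.view(1)])) * dt
            # Jump whenever a new observation becomes available.
            if k < len(obs_times) and t >= obs_times[k]:
                h = self.jump(torch.cat([h, obs_values[k]]))
                k += 1
            preds.append(self.readout(h))   # estimate of E[X_t | observations up to t]
            t_prev = t
        return torch.stack(preds)

model = NeuralJumpODE()
t_grid = torch.linspace(0.0, 1.0, 50)
obs_times = torch.tensor([0.1, 0.4, 0.7])
obs_values = torch.randn(3, 1)
out = model(obs_times, obs_values, t_grid)   # trained by minimizing an L^2-type loss
```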
We introduce the space of rough paths with Sobolev regularity and the corresponding concept of controlled Sobolev paths. Based on these notions, we study rough path integration and rough differential equations. As the main result, we prove that the solution map associated to differential equations driven by rough paths is a locally Lipschitz continuous map on the Sobolev rough path space for arbitrarily low regularity $\alpha$ and integrability $p$, provided $\alpha > 1/p$.
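Schematically, the rough differential equations in question take the form

$$\mathrm{d}Y_t = f(Y_t)\,\mathrm{d}\mathbf{X}_t, \qquad Y_0 = y_0,$$

where $\mathbf{X} = (X, \mathbb{X})$ is a (Sobolev) rough path and $f$ a sufficiently smooth vector field; the solution map referred to above is the Itô–Lyons map $(y_0, \mathbf{X}) \mapsto Y$.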
We propose a fully data-driven approach to calibrate local stochastic volatility (LSV) models, circumventing in particular the ad hoc interpolation of the volatility surface. To achieve this, we parametrize the leverage function by a family of feed-forward neural networks and learn their parameters directly from the available market option prices. This should be seen in the context of neural SDEs and (causal) generative adversarial networks: we generate volatility surfaces by specific neural SDEs, whose quality is assessed by quantifying, possibly in an adversarial manner, distances to market prices. The minimization of the calibration functional relies strongly on a variance reduction technique based on hedging and deep hedging, which is interesting in its own right: it allows the calculation of model prices and model implied volatilities in an accurate way using only small sets of sample paths. For a numerical illustration we implement a SABR-type LSV model and conduct a thorough statistical performance analysis on many samples of implied volatility smiles, showing the accuracy and stability of the method.
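A stripped-down sketch of the calibration loop is given below; the SABR-type dynamics, the placeholder market quotes and the single leverage network are illustrative assumptions, and the hedging-based variance reduction as well as the adversarial assessment are omitted for brevity.

```python
import torch
import torch.nn as nn

# Leverage function L(t, s) parametrized by a neural network (one net for the whole
# time horizon here; a family of networks, one per maturity interval, is also possible).
leverage = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Softplus())
opt = torch.optim.Adam(leverage.parameters(), lr=1e-3)

def simulate_lsv(n_paths=4096, n_steps=100, T=1.0, s0=1.0, alpha0=0.2, nu=0.5):
    """Euler simulation of an illustrative SABR-type LSV model:
       dS = L(t, S) * a * S dW1,   da = nu * a dW2  (independent Brownian motions)."""
    dt = T / n_steps
    s = torch.full((n_paths, 1), s0)
    a = torch.full((n_paths, 1), alpha0)
    for i in range(n_steps):
        t = torch.full((n_paths, 1), i * dt)
        dw1 = torch.randn(n_paths, 1) * dt ** 0.5
        dw2 = torch.randn(n_paths, 1) * dt ** 0.5
        s = s + leverage(torch.cat([t, s], dim=1)) * a * s * dw1
        a = a * torch.exp(nu * dw2 - 0.5 * nu ** 2 * dt)
        s = s.clamp(min=1e-6)
    return s

strikes = torch.tensor([0.8, 0.9, 1.0, 1.1, 1.2])
market_prices = torch.tensor([0.21, 0.13, 0.08, 0.04, 0.02])  # placeholder quotes

for step in range(200):
    s_T = simulate_lsv()
    model_prices = torch.relu(s_T - strikes).mean(dim=0)  # Monte Carlo call prices per strike
    loss = ((model_prices - market_prices) ** 2).sum()    # calibration functional
    opt.zero_grad(); loss.backward(); opt.step()
```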
We estimate the Lipschitz constants of the gradient of a deep neural network and of the network itself with respect to the full set of parameters. We first develop estimates for a deep feed-forward densely connected network and then, in a more general framework, for all neural networks that can be represented as solutions of controlled ordinary differential equations, where time appears as continuous depth. These estimates can be used to set the step size of stochastic gradient descent methods, which is illustrated for one example method.
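The analytic estimates of the paper are not reproduced here; as a rough illustration of how a smoothness estimate translates into a step size, one can instead estimate the largest Hessian eigenvalue of the loss by power iteration on Hessian-vector products and apply the classical $1/L$ rule. All choices below are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
x, y = torch.randn(256, 10), torch.randn(256, 1)
params = [p for p in net.parameters() if p.requires_grad]

def loss_fn():
    return ((net(x) - y) ** 2).mean()

def hessian_vector_product(vec):
    # Hessian-vector product of the loss w.r.t. the parameters via double backprop.
    grads = torch.autograd.grad(loss_fn(), params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    hv = torch.autograd.grad(flat @ vec, params)
    return torch.cat([h.reshape(-1) for h in hv])

n = sum(p.numel() for p in params)
v = torch.randn(n)
v /= v.norm()
for _ in range(20):                      # power iteration
    hv = hessian_vector_product(v)
    lam = (v @ hv).item()                # Rayleigh quotient ~ largest eigenvalue
    v = hv / (hv.norm() + 1e-12)

lr = 1.0 / max(lam, 1e-8)                # classical 1/L step size for an L-smooth loss
print(f"estimated smoothness L ~ {lam:.3f}, step size {lr:.4f}")
```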
This article introduces a new mathematical concept of illiquidity that goes hand in hand with credit risk. The concept is not volume- but constraint-based, i.e., certain assets cannot be shorted and are ineligible as numeraire. If those assets are still chosen as numeraire, we arrive at a two-price economy. We utilise Jarrow & Turnbull's foreign exchange analogy that interprets defaultable zero-coupon bonds as a conversion of non-defaultable foreign counterparts. In the language of structured derivatives, the impact of credit risk is disabled through quanto-ing. In a similar fashion, we look at bond prices as if perfect liquidity were given. This corresponds to asset pricing with respect to an ineligible numeraire and necessitates Föllmer measures.
Today, various forms of neural networks are trained to perform approximation tasks in many fields. However, the estimates obtained are not fully understood on function space. Empirical results suggest that typical training algorithms favor regularized solutions. These observations motivate us to analyze the properties of the neural networks found by gradient descent initialized close to zero, which is frequently employed to perform the training task. As a starting point, we consider one-dimensional (shallow) ReLU neural networks in which the weights are chosen randomly and only the terminal layer is trained. First, we rigorously show that for such networks ridge-regularized regression corresponds in function space to regularizing the estimate's second derivative, for fairly general loss functionals. For least squares regression, we show that the trained network converges to the smooth spline interpolation of the training data as the number of hidden nodes tends to infinity. Moreover, we derive a correspondence between early-stopped gradient descent and smoothing spline regression. Our analysis might give valuable insight into the properties of the solutions obtained using gradient descent methods in general settings.
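A small numerical sketch of the setting studied (random, fixed first layer; ridge-regularized training of the terminal layer only, here in closed form rather than by gradient descent; data and hyperparameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, ridge = 2000, 1e-4

# Hypothetical one-dimensional training data.
x_train = np.sort(rng.uniform(-1, 1, 20))
y_train = np.sin(3 * x_train) + 0.1 * rng.standard_normal(20)

# Random, fixed first layer: features phi_k(x) = relu(w_k * x + b_k).
w = rng.standard_normal(n_hidden)
b = rng.uniform(-1, 1, n_hidden)

def features(x):
    return np.maximum(np.outer(x, w) + b, 0.0) / np.sqrt(n_hidden)

# Train only the terminal layer: ridge regression in closed form.
Phi = features(x_train)
coef = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_hidden), Phi.T @ y_train)

x_test = np.linspace(-1, 1, 200)
y_hat = features(x_test) @ coef   # for wide networks this behaves like a smoothing spline fit
```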