
Robust Product Markovian Quantization

Added by Ralph Rudd
Publication date: 2020
Field: Finance
Language: English





Recursive marginal quantization (RMQ) allows the construction of optimal discrete grids for approximating solutions to stochastic differential equations in d dimensions. Product Markovian quantization (PMQ) reduces this problem to d one-dimensional quantization problems by recursively constructing product quantizers, as opposed to a truly optimal quantizer. However, the standard Newton-Raphson method used in the PMQ algorithm suffers from numerical instabilities, inhibiting widespread adoption, especially for use in calibration. By directly specifying the random variable to be quantized at each time step, we show that PMQ, and RMQ in one dimension, can be expressed as standard vector quantization. This reformulation allows the application of the accelerated Lloyd's algorithm in an adaptive and robust procedure. Furthermore, in the case of stochastic volatility models, we extend the PMQ algorithm by using higher-order updates for the volatility or variance process. We illustrate the technique for European options, using the Heston model, and for more exotic products, using the SABR model.
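
Where the abstract mentions recasting PMQ as standard vector quantization so that the accelerated Lloyd's algorithm can be applied, the following is a minimal sketch of the core Lloyd fixed-point iteration for a one-dimensional quantizer built from Monte Carlo samples. It is an illustration only: the acceleration, the adaptive safeguards, and the recursive time-stepping of RMQ/PMQ described in the paper are not reproduced, and the sample distribution and grid size are arbitrary choices.

    import numpy as np

    def lloyd_quantize_1d(samples, n_levels, n_iter=50):
        """Return quantizer levels and companion probabilities for `samples`."""
        # Initialise the grid at sample quantiles (a common, stable choice).
        levels = np.quantile(samples, (np.arange(n_levels) + 0.5) / n_levels)
        for _ in range(n_iter):
            # Nearest-neighbour (Voronoi) assignment of each sample to a level.
            idx = np.abs(samples[:, None] - levels[None, :]).argmin(axis=1)
            # Lloyd update: each level moves to the conditional mean of its cell.
            for j in range(n_levels):
                cell = samples[idx == j]
                if cell.size:
                    levels[j] = cell.mean()
            levels.sort()
        # Companion weights: probability mass of each Voronoi cell.
        idx = np.abs(samples[:, None] - levels[None, :]).argmin(axis=1)
        probs = np.bincount(idx, minlength=n_levels) / samples.size
        return levels, probs

    rng = np.random.default_rng(0)
    # Samples standing in for the conditional one-step distribution quantized
    # at each time step (here simply Gaussian, parameters illustrative).
    x = 0.04 + 0.1 * rng.standard_normal(20_000)
    grid, weights = lloyd_quantize_1d(x, n_levels=20)
    print(grid[:5], weights[:5])

In the recursive setting, the samples at each step would come from one-step updates of the previously quantized grid; that recursion, and the higher-order updates for the variance process, are omitted here.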

Related research


In this paper we propose two efficient techniques which allow one to compute the price of American basket options. In particular, we consider a basket of assets that follow multi-dimensional Black-Scholes dynamics. The proposed techniques, called GPR Tree (GPR-Tree) and GPR Exact Integration (GPR-EI), are both based on Machine Learning, exploited together with binomial trees or with a closed formula for integration. Moreover, these two methods solve the backward dynamic programming problem by considering a Bermudan approximation of the American option. On the exercise dates, the value of the option is first computed as the maximum between the exercise value and the continuation value, and then approximated by means of Gaussian Process Regression. The two methods mainly differ in the approach used to compute the continuation value: a single step of a binomial tree, or integration against the probability density of the process. Numerical results show that these two methods are accurate and reliable in handling American options on very large baskets of assets. Moreover, we also consider the rough Bergomi model, which provides stochastic volatility with memory. Although this model is only two-dimensional, the whole history of the process impacts the price, and handling all this information is far from trivial. To this end, we show how to adapt the GPR-Tree and GPR-EI methods, focusing on pricing American options in this non-Markovian framework.
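
As a rough illustration of the backward step that both methods share, the sketch below fits a Gaussian Process Regression to continuation values at a few basket points and takes the maximum with the exercise value. It is only a sketch under simplifying assumptions: the continuation values are estimated with a crude one-step Monte Carlo in a Black-Scholes basket rather than with the paper's binomial-tree (GPR-Tree) or exact-integration (GPR-EI) computations, and the parameters and payoff are illustrative.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)
    d, n_points, n_mc = 5, 200, 500          # basket size, GPR points, MC samples
    r, sigma, dt, K = 0.03, 0.2, 0.1, 1.0    # illustrative parameters

    S = rng.lognormal(mean=0.0, sigma=0.3, size=(n_points, d))   # sample baskets
    payoff = lambda s: np.maximum(K - s.mean(axis=-1), 0.0)      # put on the mean

    # Value function at the next exercise date; at maturity it is the payoff.
    next_value = payoff

    # One-step continuation value at each training point (crude MC estimate).
    Z = rng.standard_normal((n_points, n_mc, d))
    S_next = S[:, None, :] * np.exp((r - 0.5 * sigma**2) * dt
                                    + sigma * np.sqrt(dt) * Z)
    cont = np.exp(-r * dt) * next_value(S_next).mean(axis=1)

    # Fit GPR so the continuation value can be evaluated at arbitrary baskets.
    gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                   normalize_y=True).fit(S, cont)
    value_now = np.maximum(payoff(S), gpr.predict(S))   # Bermudan update
    print(value_now[:5])

Iterating this step backward over the exercise dates, with the fitted GPR serving as the next-step value function, gives the Bermudan approximation of the American price.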
Evaluating moving average options is a tough computational challenge for the energy and commodity markets, because the payoff of the option depends on the prices of the underlying observed over a moving window; when a long window is considered, the pricing problem becomes high dimensional. We present an efficient method for pricing Bermudan-style moving average options, based on Gaussian Process Regression and Gauss-Hermite quadrature, and thus named GPR-GHQ. Specifically, the proposed algorithm proceeds backward in time and, at each time step, the continuation value is computed at only a few points by Gauss-Hermite quadrature and then learned through Gaussian Process Regression. We test the proposed approach in the Black-Scholes model, where the GPR-GHQ method is made even more efficient by exploiting the positive homogeneity of the continuation value, which allows one to reduce the problem size. Positive homogeneity is also exploited to develop a binomial Markov chain, which is able to deal efficiently with medium-to-long windows. Secondly, we test GPR-GHQ in the Clewlow-Strickland model, the reference framework for modelling energy commodity prices. Finally, we consider a challenging problem with a double non-Markovian feature, namely the rough Bergomi model. In this case, the pricing problem is even harder, since the whole history of the volatility process affects the future distribution of the process. The manuscript includes a numerical investigation which shows that GPR-GHQ is very accurate and able to handle options with a very long window, thus overcoming the problem of high dimensionality.
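
The quadrature ingredient of GPR-GHQ can be illustrated in isolation: the sketch below computes a one-step discounted conditional expectation of a value function with Gauss-Hermite nodes under Black-Scholes dynamics. It is a minimal example under stated assumptions; the GPR interpolation of the continuation value, the moving-average state, and the homogeneity reduction described in the abstract are not included, and the value function and parameters are arbitrary.

    import numpy as np

    def ghq_continuation(value_fn, s, r, sigma, dt, n_nodes=16):
        """Discounted E[value_fn(S_{t+dt}) | S_t = s] via Gauss-Hermite quadrature."""
        x, w = np.polynomial.hermite.hermgauss(n_nodes)   # physicists' nodes/weights
        z = np.sqrt(2.0) * x                              # map nodes to standard normals
        s_next = s * np.exp((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        return np.exp(-r * dt) * np.sum(w * value_fn(s_next)) / np.sqrt(np.pi)

    # Example: continuation value of a put payoff one step before maturity.
    payoff = lambda s: np.maximum(1.0 - s, 0.0)
    print(ghq_continuation(payoff, s=1.0, r=0.03, sigma=0.2, dt=0.1))

With only a handful of nodes the expectation is computed cheaply at a few training points, and a Gaussian Process Regression would then interpolate it over the rest of the state space.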
Fabien Le Floch, 2020
We present an alternative formula to price European options through cosine series expansions, under models with a known characteristic function such as the Heston stochastic volatility model. It is more robust across strikes and as fast as the original COS method.
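
For orientation, the sketch below shows the standard COS-style recipe that such work builds on: recover the density of the log-price from its characteristic function via a cosine expansion, then integrate the payoff numerically. A Black-Scholes characteristic function is used as a stand-in (the Heston one is lengthy), the truncation range and number of terms are ad-hoc choices, and the paper's alternative, strike-robust formula is not reproduced here.

    import numpy as np

    S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
    mu = np.log(S0) + (r - 0.5 * sigma**2) * T            # mean of log S_T
    cf = lambda u: np.exp(1j * u * mu - 0.5 * sigma**2 * T * u**2)

    # Truncation range and number of cosine terms (ad-hoc choices).
    a = mu - 10 * sigma * np.sqrt(T)
    b = mu + 10 * sigma * np.sqrt(T)
    N = 256

    u = np.arange(N) * np.pi / (b - a)
    A = 2.0 / (b - a) * np.real(cf(u) * np.exp(-1j * u * a))   # cosine coefficients
    A[0] *= 0.5                                                 # first term halved

    x = np.linspace(a, b, 2001)                 # log-price grid
    density = A @ np.cos(np.outer(u, x - a))    # recovered density of log S_T
    dx = x[1] - x[0]
    call = np.exp(-r * T) * np.sum(np.maximum(np.exp(x) - K, 0.0) * density) * dx
    print(call)   # close to the Black-Scholes reference value (about 9.4)

Swapping in a Heston characteristic function would price under stochastic volatility with the same few lines, which is the setting the paper's more robust formula targets.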
Product Quantization (PQ) has long been a mainstream approach for generating an exponentially large codebook at very low memory/time cost. Despite its success, PQ remains tricky when decomposing a high-dimensional vector space, and retraining the model is usually unavoidable when the code length changes. In this work, we propose a deep progressive quantization (DPQ) model, as an alternative to PQ, for large-scale image retrieval. DPQ learns the quantization codes sequentially and approximates the original feature space progressively. Therefore, we can train the quantization codes with different code lengths simultaneously. Specifically, we first utilize the label information to guide the learning of visual features, and then apply several quantization blocks to progressively approximate the visual features. Each quantization block is designed as a layer of a convolutional neural network, and the whole framework can be trained in an end-to-end manner. Experimental results on benchmark datasets show that our model significantly outperforms the state of the art for image retrieval. Our model is trained once for different code lengths and therefore requires less computation time. An additional ablation study demonstrates the effect of each component of the proposed model. Our code is released at https://github.com/cfm-uestc/DPQ.
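
For context, the sketch below implements the classical Product Quantization baseline that DPQ is positioned against: each vector is split into subvectors, a small k-means codebook is learned per subspace, and a vector is encoded as one centroid index per subspace. The deep progressive model itself (label-guided features, stacked quantization blocks, end-to-end training) is not reproduced; the data, subspace count, and codebook size are arbitrary.

    import numpy as np
    from sklearn.cluster import KMeans

    def pq_train(X, n_subspaces=4, n_codewords=32, seed=0):
        """Learn one k-means codebook per subspace; X has shape (n, d)."""
        subvectors = np.split(X, n_subspaces, axis=1)
        return [KMeans(n_clusters=n_codewords, n_init=4, random_state=seed).fit(s)
                for s in subvectors]

    def pq_encode(X, codebooks):
        """Encode vectors as per-subspace centroid indices (compact codes)."""
        subvectors = np.split(X, len(codebooks), axis=1)
        return np.stack([cb.predict(s) for cb, s in zip(codebooks, subvectors)],
                        axis=1)

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5000, 64)).astype(np.float32)   # toy feature vectors
    codebooks = pq_train(X)
    codes = pq_encode(X[:10], codebooks)
    print(codes.shape)   # (10, 4): each 64-d vector stored as 4 small integers

Because the codebook grows multiplicatively across subspaces while storage grows only additively, PQ achieves its exponentially large effective codebook at low cost; DPQ's contribution is to learn such codes progressively so that different code lengths share one training run.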
Product quantization (PQ) is a widely used technique for ad-hoc retrieval. Recent studies propose supervised PQ, where the embedding and quantization models can be jointly trained with supervised learning. However, there is a lack of an appropriate formulation of the joint training objective; thus, the improvements over previous non-supervised baselines are limited in practice. In this work, we propose Matching-oriented Product Quantization (MoPQ), where a novel objective, the Multinoulli Contrastive Loss (MCL), is formulated. By minimizing MCL, we maximize the matching probability of a query and its ground-truth key, which contributes to optimal retrieval accuracy. Given that the exact computation of MCL is intractable due to the vast number of contrastive samples required, we further propose Differentiable Cross-device Sampling (DCS), which significantly augments the contrastive samples for a precise approximation of MCL. We conduct extensive experimental studies on four real-world datasets, whose results verify the effectiveness of MoPQ. The code is available at https://github.com/microsoft/MoPQ.
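
The sketch below illustrates the general shape of a matching-oriented contrastive objective: each query's ground-truth key is treated as the positive class in a softmax over all in-batch keys, and the cross-entropy is minimized. This is a generic stand-in written with PyTorch; the exact Multinoulli Contrastive Loss, the quantization of the keys, and the cross-device sampling (DCS) from the paper are not reproduced, and the temperature and embeddings are placeholders.

    import torch
    import torch.nn.functional as F

    def matching_contrastive_loss(queries, keys, temperature=0.05):
        """queries, keys: (batch, dim); row i of `keys` is the positive for query i."""
        q = F.normalize(queries, dim=-1)
        k = F.normalize(keys, dim=-1)
        logits = q @ k.t() / temperature          # similarity of every (query, key) pair
        targets = torch.arange(q.size(0), device=q.device)
        return F.cross_entropy(logits, targets)   # maximise P(ground-truth key | query)

    # Example with random embeddings standing in for encoder/quantizer outputs.
    q = torch.randn(32, 128)
    k = torch.randn(32, 128)
    print(matching_contrastive_loss(q, k).item())

In the paper's setting the keys would be the quantized representations and the contrastive pool would be enlarged across devices, which is what DCS provides to approximate the full loss.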
