
Computation of Expected Shortfall by fast detection of worst scenarios

Posted by Bruno Bouchard
Publication date: 2020
Research field: Finance
Paper language: English
Author: Bruno Bouchard





We consider a multi-step algorithm for the computation of the historical expected shortfall as defined by the Basel Minimum Capital Requirements for Market Risk. At each step of the algorithm, we use Monte Carlo simulations to reduce the number of historical scenarios that potentially belong to the set of worst scenarios. The number of simulations increases as the number of candidate scenarios is reduced and the distance between them diminishes. For the most naive scheme, we show that the $L^p$-error of the estimator of the Expected Shortfall is bounded by a linear combination of the probabilities of inversion of favorable and unfavorable scenarios at each step, and of the last-step Monte Carlo error associated with each scenario. By using concentration inequalities, we then show that, for sub-gamma pricing errors, the probabilities of inversion converge at an exponential rate in the number of simulated paths. We then propose an adaptive version in which the algorithm improves, step by step, its knowledge of the unknown parameters of interest: the mean and variance of the Monte Carlo estimators of the different scenarios. Both schemes can be optimized by using dynamic programming algorithms that can be solved offline. To our knowledge, these are the first non-asymptotic bounds for such estimators. Our hypotheses are weak enough to allow the use of estimators for the different scenarios and steps based on the same random variables, which, in practice, considerably reduces the computational effort. First numerical tests are performed.
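To make the procedure concrete, here is a minimal Python sketch of a multi-step worst-scenario search of this kind. It is an illustration under simplifying assumptions, not the paper's exact scheme: the pruning rule uses ad-hoc confidence bounds (margin_z) in place of the paper's probability-of-inversion analysis, the simulation budgets are arbitrary, and loss_simulator is a hypothetical function returning i.i.d. Monte Carlo loss draws for a given historical scenario.

```python
import numpy as np

def expected_shortfall_multistep(loss_simulator, n_scenarios, k_worst,
                                 budgets=(100, 1_000, 10_000),
                                 margin_z=3.0, rng=None):
    # Hypothetical interface: loss_simulator(i, n, rng) returns n i.i.d.
    # Monte Carlo draws of the loss under historical scenario i.
    rng = np.random.default_rng() if rng is None else rng
    candidates = np.arange(n_scenarios)      # scenarios that may still be "worst"
    mean = np.zeros(n_scenarios)             # current loss estimates
    sem = np.full(n_scenarios, np.inf)       # standard errors of those estimates

    for n_paths in budgets:                  # the simulation budget grows each step
        for i in candidates:
            draws = loss_simulator(i, n_paths, rng)
            mean[i] = draws.mean()
            sem[i] = draws.std(ddof=1) / np.sqrt(n_paths)

        # Keep scenario i only if its upper confidence bound still reaches the
        # k-th largest lower confidence bound among the remaining candidates.
        lower = mean[candidates] - margin_z * sem[candidates]
        threshold = np.sort(lower)[-k_worst]
        keep = mean[candidates] + margin_z * sem[candidates] >= threshold
        candidates = candidates[keep]
        if len(candidates) == k_worst:       # worst set identified, stop early
            break

    # ES estimate: average of the k worst estimated scenario losses.
    worst = candidates[np.argsort(mean[candidates])[-k_worst:]]
    return mean[worst].mean(), worst

# Toy usage: each scenario's loss is Gaussian around a scenario-specific level.
toy = lambda i, n, rng: rng.normal(loc=i / 10.0, scale=1.0, size=n)
es_estimate, worst_set = expected_shortfall_multistep(toy, n_scenarios=250, k_worst=6)
```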




Read also

We introduce and study the main properties of a class of convex risk measures that refine Expected Shortfall by simultaneously controlling the expected losses associated with different portions of the tail distribution. The corresponding adjusted Expected Shortfalls quantify risk as the minimum amount of capital that has to be raised and injected into a financial position $X$ to ensure that Expected Shortfall $ES_p(X)$ does not exceed a pre-specified threshold $g(p)$ for every probability level $p \in [0,1]$. Through the choice of the benchmark risk profile $g$ one can tailor the risk assessment to the specific application of interest. We devote special attention to the study of risk profiles defined by the Expected Shortfall of a benchmark random loss, in which case our risk measures are intimately linked to second-order stochastic dominance.
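As a rough illustration of this definition, the sketch below computes an empirical adjusted Expected Shortfall on simulated losses as $\sup_p (ES_p(X) - g(p))$ over a grid of levels, relying on cash-additivity of ES to rewrite the capital requirement. The grid, the empirical ES estimator, the sign convention (losses positive), and the Student-t/normal toy data are assumptions made for the example, not part of the paper.

```python
import numpy as np

def es_p(losses, p):
    # Empirical Expected Shortfall at level p (losses positive = bad):
    # the mean of the worst (1 - p) fraction of the sample.
    losses = np.asarray(losses)
    var_p = np.quantile(losses, p)
    return losses[losses >= var_p].mean()

def adjusted_es(losses, g, p_grid=np.linspace(0.0, 0.999, 200)):
    # Smallest capital m with ES_p(X) - m <= g(p) for all p on the grid,
    # i.e. sup_p (ES_p(X) - g(p)) by cash-additivity of ES.
    return max(es_p(losses, p) - g(p) for p in p_grid)

# Toy usage: the benchmark profile g is itself the ES of a reference loss Z.
rng = np.random.default_rng(0)
X = rng.standard_t(df=3, size=100_000)   # heavy-tailed position losses
Z = rng.standard_normal(100_000)         # benchmark (reference) loss
print(adjusted_es(X, g=lambda p: es_p(Z, p)))
```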
The 2008 mortgage crisis is an example of an extreme event. Extreme value theory tries to estimate such tail risks. Modern finance practitioners prefer Expected Shortfall based risk metrics (which capture tail risk) over traditional approaches like volatility or even Value-at-Risk. This paper provides a quantum annealing algorithm in QUBO form for a dynamic asset allocation problem using an expected shortfall constraint. It was motivated by the need to refine the current quantum algorithms for Markowitz-type problems, which are academically interesting but not useful for practitioners. The algorithm is dynamic and the risk target emerges naturally from the market volatility. Moreover, it avoids complicated statistics like the generalized Pareto distribution. It translates the problem into qubit form suitable for implementation by a quantum annealer like D-Wave. Such QUBO algorithms are expected to be solved faster by quantum annealing systems than any classical algorithm on a classical computer (though this has yet to be demonstrated at scale).
We consider option hedging in a model where the underlying follows an exponential Lévy process. We derive approximations to the variance-optimal and to some suboptimal strategies as well as to their mean squared hedging errors. The results are obtained by considering the Lévy model as a perturbation of the Black-Scholes model. The approximations depend on the first four moments of logarithmic stock returns in the Lévy model and option price sensitivities (greeks) in the limiting Black-Scholes model. We illustrate numerically that our formulas work well for a variety of Lévy models suggested in the literature. From a theoretical point of view, it turns out that jumps have a similar effect on hedging errors as discrete-time hedging in the Black-Scholes model.
This article presents differential equations and solution methods for functions of the form $Q(x) = F^{-1}(G(x))$, where $F$ and $G$ are cumulative distribution functions. Such functions allow the direct recycling of Monte Carlo samples from one distribution into samples from another. The method may be developed analytically for certain special cases, and illuminates the idea that it is a more precise form of the traditional Cornish-Fisher expansion. In this manner the model risk of the distributional choice may be assessed free of the Monte Carlo noise associated with resampling. Examples are given of equations for converting normal samples to Student t, and converting exponential to hyperbolic, variance gamma and normal. In the case of the normal distribution, the change of variables employed allows the sampling to take place to good accuracy based on a single rational approximation over a very wide range of the sample space. The avoidance of any branching statement is of use in optimal GPU computations as it avoids the effect of \textit{warp divergence}, and we give examples of branch-free normal quantiles that offer performance improvements in a GPU environment, while retaining the best precision characteristics of well-known methods. We also offer models based on a low probability of warp divergence. Comparisons of new and old forms are made on the Nvidia Quadro 4000, GTX 285 and 480, and Tesla C2050 GPUs. We argue that in single-precision mode, the change-of-variables approach offers performance competitive with the fastest existing scheme while substantially improving precision, and that in double-precision mode, this approach offers the most GPU-optimal Gaussian quantile yet, without compromise on precision for Monte Carlo applications, working twice as fast as the CUDA 4 library function with increased precision.
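The recycling map $Q(x) = F^{-1}(G(x))$ can be illustrated directly with library CDFs, before any of the article's ODE machinery is applied. The short Python sketch below converts standard-normal draws into Student-t draws with scipy; the scipy-based inversion and the choice df=4 are assumptions made for the example, not the article's branch-free GPU method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)         # samples from G = standard normal
q = stats.t.ppf(stats.norm.cdf(x), df=4)   # Q(x) = F^{-1}(G(x)) with F = Student t_4

# The recycled sample has Student-t statistics: variance ~ df/(df-2) = 2.
print(q.mean(), q.var())
```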
In this paper, we propose a methodology based on a piece-wise homogeneous Markov chain for credit ratings and a multivariate model of the credit spreads to evaluate the financial risk in the European Union (EU). Two main aspects are considered: how the financial risk is distributed among the European countries and how large the total risk is. The first aspect is evaluated by means of the expected value of a dynamic entropy measure. The second one is addressed by computing the evolution of the total credit spread over time. Moreover, the covariance between the countries' total spreads allows us to understand any contagion in the EU. The methodology is applied to real data of 24 countries for the three major agencies: Moody's, Standard & Poor's, and Fitch. The obtained results suggest that both the financial risk inequality and the value of the total risk increase over time at a different rate depending on the rating agency, and that the dependence structure is characterized by a strong correlation between most European countries.
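The idea of measuring how risk is distributed among countries with an entropy can be illustrated with a very simplified stand-in: the sketch below computes the Shannon entropy of each period's credit-spread shares across countries, so that lower entropy signals risk concentrating in fewer countries. The normalization and the toy data are assumptions made for illustration; the paper's dynamic entropy measure on the Markov rating model is not reproduced here.

```python
import numpy as np

def risk_distribution_entropy(spreads):
    # Shannon entropy of each period's credit-spread shares across countries.
    # High entropy: risk spread evenly; low entropy: risk concentrated.
    # spreads has shape (n_periods, n_countries) with positive entries.
    shares = spreads / spreads.sum(axis=1, keepdims=True)
    return -(shares * np.log(shares)).sum(axis=1)

# Toy usage with three periods and four hypothetical countries:
spreads = np.array([[1.0, 1.0, 1.0, 1.0],    # risk evenly distributed
                    [2.0, 1.0, 0.5, 0.5],    # starting to concentrate
                    [5.0, 0.5, 0.3, 0.2]])   # concentrated in one country
print(risk_distribution_entropy(spreads))    # entropy decreases over time
```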