
Risk Measures and Credit Risk Under the Beta-Kotz Distribution

Posted by: Jean-Michel Loubes
Publication date: 2018
Research field: Mathematical Statistics
Paper language: English





This paper considers the use of the so-called Beta-Kotz distribution, a member of a general family of distributions that includes the classical Gaussian model, for Value-at-Risk computations. Specifically, it develops a new method for estimating the Value-at-Risk, the Conditional Value-at-Risk and the Economic Capital when the underlying risk factors follow a Beta-Kotz distribution. After the parameters of the distribution of the loss random variable are estimated, both analytical approaches (for particular parameter values) and numerical approaches are provided for computing these measures. The proposed risk measures are then applied to quantify the potential risk of economic losses in credit risk.
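As a rough illustration of how such measures are obtained numerically, the following is a minimal Monte Carlo sketch, not the paper's method: it estimates VaR, CVaR and Economic Capital from simulated losses. A Student-t sample stands in for draws from a fitted Beta-Kotz loss distribution, since the paper's parametrization and sampler are not reproduced here, and the convention EC = VaR minus expected loss is an assumption.

```python
import numpy as np

def var_cvar_ec(losses, alpha=0.99):
    """Empirical VaR, CVaR and Economic Capital from sampled losses.

    `losses` is a 1-D array of simulated portfolio losses (positive = loss).
    Economic Capital is taken here as VaR minus expected loss, one common
    convention; the paper may use a different one.
    """
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)       # Value-at-Risk at level alpha
    cvar = losses[losses >= var].mean()    # Conditional VaR (expected shortfall)
    ec = var - losses.mean()               # unexpected loss over the mean
    return var, cvar, ec

# Placeholder heavy-tailed loss model (Student-t with 4 degrees of freedom).
rng = np.random.default_rng(0)
losses = rng.standard_t(df=4, size=100_000)
print(var_cvar_ec(losses, alpha=0.99))
```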


Read also

We study the problem of estimating a multivariate convex function defined on a convex body in a regression setting with random design. We are interested in optimal rates of convergence under a squared global continuous $l_2$ loss in the multivariate setting $(d \geq 2)$. One crucial fact is that the minimax risks depend heavily on the shape of the support of the regression function. It is shown that the global minimax risk is on the order of $n^{-2/(d+1)}$ when the support is sufficiently smooth, but that the rate $n^{-4/(d+4)}$ is attained when the support is a polytope. Such differences in rates are due to difficulties in estimating the regression function near the boundary of smooth regions. We then study the natural bounded least squares estimators (BLSE): we show that the BLSE nearly attains the optimal rates of convergence in low dimensions, while suffering rate-inefficiency in high dimensions. We show that the BLSE adapts nearly parametrically to polyhedral functions when the support is polyhedral in low dimensions by a local entropy method. We also show that the boundedness constraint cannot be dropped when risk is assessed via continuous $l_2$ loss. Given rate sub-optimality of the BLSE in higher dimensions, we further study rate-efficient adaptive estimation procedures. Two general model selection methods are developed to provide sieved adaptive estimators (SAE) that achieve nearly optimal rates of convergence for particular regular classes of convex functions, while maintaining nearly parametric rate-adaptivity to polyhedral functions in arbitrary dimensions. Interestingly, the uniform boundedness constraint is unnecessary when risks are measured in discrete $l_2$ norms.
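For concreteness, here is a hedged sketch of the kind of estimator the BLSE refers to: least squares convex regression with a uniform bound, written as a quadratic program in cvxpy. The bound B and the brute-force pairwise constraints are illustrative choices, not the paper's implementation.

```python
import cvxpy as cp
import numpy as np

def blse(X, y, B=10.0):
    """Bounded least squares convex regression (a sketch of the BLSE).

    Fits values g_i and subgradients xi_i at the design points X (n x d)
    so that the fit is consistent with some convex function, with the
    uniform boundedness constraint |g_i| <= B.
    """
    n, d = X.shape
    g = cp.Variable(n)
    xi = cp.Variable((n, d))        # subgradient of the fit at each X[i]
    cons = [cp.abs(g) <= B]         # the boundedness constraint
    for i in range(n):
        for j in range(n):
            if i != j:
                # convexity: g(x_j) >= g(x_i) + <xi_i, x_j - x_i>
                cons.append(g[j] >= g[i] + xi[i] @ (X[j] - X[i]))
    prob = cp.Problem(cp.Minimize(cp.sum_squares(y - g)), cons)
    prob.solve()
    return g.value, xi.value
```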
In this paper we investigate the estimation of the unknown parameters of a competing risks model combining a Weibull-distributed decreasing failure rate with an exponentially distributed constant failure rate, under right-censored data, using maximum likelihood estimators.
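One standard way to write such a model, as an illustration since the abstract does not give the parametrization: the observed lifetime is the minimum of a Weibull latent time (shape $\beta<1$, hence a decreasing failure rate) and an exponential latent time (constant rate $\theta$).

```latex
% Hazard and survival of T = min(T_W, T_E) with independent components:
\[
  h(t) = \frac{\beta}{\lambda}\Bigl(\frac{t}{\lambda}\Bigr)^{\beta-1} + \theta,
  \qquad
  S(t) = \exp\!\bigl[-(t/\lambda)^{\beta} - \theta t\bigr].
\]
% Right-censored likelihood, with delta_i = 1 for observed failures:
\[
  L(\beta,\lambda,\theta) \;=\; \prod_{i=1}^{n} h(t_i)^{\,\delta_i}\, S(t_i),
\]
% whose maximizers give the maximum likelihood estimators.
```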
In this paper, we introduce the rich classes of conditional distortion (CoD) risk measures and distortion risk contribution ($\Delta$CoD) measures as measures of systemic risk and analyze their properties and representations. The classes include the well-known conditional Value-at-Risk, conditional Expected Shortfall, and risk contribution measures in terms of the VaR and ES as special cases. Sufficient conditions are presented for two random vectors to be ordered by the proposed CoD-risk measures and distortion risk contribution measures. These conditions are expressed using the conventional stochastic dominance, increasing convex/concave, dispersive, and excess wealth orders of the marginals and canonical positive/negative stochastic dependence notions. Numerical examples are provided to illustrate our theoretical findings. This paper is the second in a triplet of papers on systemic risk by the same authors. In \cite{DLZorder2018a}, we introduce and analyze some new stochastic orders related to systemic risk. In a third (forthcoming) paper, we attribute systemic risk to the different participants in a given risky environment.
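For orientation, one conventional formulation of distortion and conditional risk measures is sketched below; sign and conditioning conventions vary across papers, so this should be read as illustrative rather than as the paper's exact definitions.

```latex
% For a distortion function g on [0,1], increasing with g(0)=0, g(1)=1,
% the distortion risk measure of a loss Y is (one common convention)
\[
  \rho_g(Y) \;=\; \int_0^1 F_Y^{-1}(1-u)\,\mathrm{d}g(u),
\]
% which recovers VaR and ES for particular choices of g, since
% VaR_p(Y) = F_Y^{-1}(p). A conditional (Co) version evaluates Y under
% a stress event on another risk X, e.g.
\[
  \operatorname{CoVaR}_{\alpha,\beta}(Y \mid X)
  \;=\; \operatorname{VaR}_{\beta}\!\bigl(Y \mid X \ge \operatorname{VaR}_{\alpha}(X)\bigr),
\]
% with risk contribution (Delta) measures obtained by subtracting the
% same quantity computed under a benchmark (e.g., median) event for X.
```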
Consider the problem of simultaneous testing for the means of independent normal observations. In this paper, we study some asymptotic optimality properties of certain multiple testing rules induced by a general class of one-group shrinkage priors in a Bayesian decision theoretic framework, where the overall loss is taken as the number of misclassified hypotheses. We assume a two-groups normal mixture model for the data and consider the asymptotic framework adopted in Bogdan et al. (2011), who introduced the notion of asymptotic Bayes optimality under sparsity in the context of multiple testing. The general class of one-group priors under study is rich enough to include, among others, the families of three-parameter beta and generalized double Pareto priors, and in particular the horseshoe, the normal-exponential-gamma and the Strawderman-Berger priors. We establish that within our chosen asymptotic framework, the multiple testing rules under study asymptotically attain the risk of the Bayes Oracle up to a multiplicative factor, with the constant in the risk close to the constant in the Oracle risk. This is similar to a result obtained in Datta and Ghosh (2013) for the multiple testing rule based on the horseshoe estimator introduced in Carvalho et al. (2009, 2010). We further show that under a very mild assumption on the underlying sparsity parameter, the induced decision rules based on an empirical Bayes estimate of the corresponding global shrinkage parameter proposed by van der Pas et al. (2014) attain the optimal Bayes risk up to the same multiplicative factor asymptotically. We provide a unifying argument applicable for the general class of priors under study. In the process, we settle a conjecture regarding the optimality property of the generalized double Pareto priors made in Datta and Ghosh (2013). Our work also shows that the result in Datta and Ghosh (2013) can be improved further.
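The two-groups model referred to here is, in one standard notation (illustrative; the variance symbols are assumptions, not taken from the abstract):

```latex
% Sparse two-groups normal mixture: each observation is null with
% probability 1-p and carries signal with probability p,
\[
  X_i \;\overset{\mathrm{iid}}{\sim}\;
  (1-p)\,N(0,\sigma^2) \;+\; p\,N(0,\sigma^2+\psi^2),
  \qquad i=1,\dots,n,
\]
% with sparsity p -> 0 as n -> infinity. A one-group shrinkage prior
% induces a testing rule by declaring H_{0i} false when the posterior
% shrinkage weight of X_i exceeds one half.
```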
Recently, equal risk pricing, a framework for fair derivative pricing, was extended to consider dynamic risk measures. However, all current implementations either employ a static risk measure that violates time consistency, or are based on traditional dynamic-programming solution schemes that are impractical in problems with a large number of underlying assets (due to the curse of dimensionality) or with incomplete information about the asset dynamics. In this paper, we extend for the first time a well-known off-policy deterministic actor-critic deep reinforcement learning (ACRL) algorithm to the problem of solving a risk-averse Markov decision process that models risk using a time-consistent recursive expectile risk measure. This new ACRL algorithm allows us to identify high-quality time-consistent hedging policies (and equal risk prices) for options, such as basket options, that cannot be handled using traditional methods, or in contexts where only historical trajectories of the underlying assets are available. Our numerical experiments, which involve both a simple vanilla option and a more exotic basket option, confirm that the new ACRL algorithm can produce: 1) in simple environments, nearly optimal hedging policies and highly accurate prices, simultaneously for a range of maturities; 2) in complex environments, good-quality policies and prices using a reasonable amount of computing resources; and 3) overall, hedging strategies that actually outperform the strategies produced using static risk measures when the risk is evaluated at later points in time.
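Since the recursive risk measure is built from expectiles, the following minimal Python sketch (not the paper's ACRL algorithm) computes an empirical expectile by the standard fixed-point iteration on its first-order condition.

```python
import numpy as np

def expectile(x, tau=0.9, iters=100):
    """Empirical tau-expectile of a sample (a minimal sketch).

    The tau-expectile is the minimizer m of
        E[ tau*(X-m)_+^2 + (1-tau)*(m-X)_+^2 ],
    the one-step risk functional underlying a recursive expectile risk
    measure. The first-order condition gives a weighted-mean fixed point,
    iterated here until convergence.
    """
    x = np.asarray(x, dtype=float)
    m = x.mean()
    for _ in range(iters):
        w = np.where(x > m, tau, 1.0 - tau)   # asymmetric weights
        m = np.sum(w * x) / np.sum(w)         # weighted-mean update
    return m

rng = np.random.default_rng(1)
sample = rng.normal(size=50_000)
print(expectile(sample, tau=0.9))  # > 0 for tau > 1/2: a risk-averse tilt
```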