
A Directional Multivariate Value at Risk

Publication date: 2015
Language: English





In economics, insurance and finance, value at risk (VaR) is a widely used measure of the risk of loss on a specific portfolio of financial assets. For a given portfolio, time horizon, and probability $\alpha$, the $100\alpha\%$ VaR is defined as a threshold loss value such that the probability that the loss on the portfolio over the given time horizon exceeds this value is $\alpha$. That is to say, it is a quantile of the distribution of the losses, which has both good analytic properties and an easy interpretation as a risk measure. However, its extension to the multivariate framework is not unique, because a unique definition of the multivariate quantile does not exist. In the current literature, multivariate quantiles are related either to a specific partial order considered on $\mathbb{R}^{n}$ or to a property of the univariate quantile that is desirable to extend to $\mathbb{R}^{n}$. In this work, we introduce a multivariate value at risk as a vector-valued directional risk measure, based on a directional multivariate quantile recently introduced in the literature. The directional approach allows the manager to incorporate external information or risk preferences into her/his analysis. We derive some properties of the risk measure and compare the univariate \textit{VaR} over the marginals with the components of the directional multivariate VaR. We also analyze some families of copulas for which it is possible to obtain closed forms of the multivariate VaR that we propose. Finally, comparisons with other alternative multivariate VaR notions given in the literature are provided in terms of robustness.
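To make the quantile definition above concrete, here is a minimal numerical sketch of the univariate case: the $100\alpha\%$ VaR as the empirical $(1-\alpha)$-quantile of the losses, computed marginal by marginal. The directional multivariate VaR of the paper requires the directional quantile machinery and is not reproduced here; this only illustrates the component-wise marginal VaRs used as the benchmark for comparison. The data and function names are assumptions of this illustration.

    import numpy as np

    def var_alpha(losses, alpha):
        # Empirical 100*alpha% VaR: the (1 - alpha)-quantile of the losses,
        # so the probability that a loss exceeds it is approximately alpha.
        return np.quantile(losses, 1.0 - alpha)

    rng = np.random.default_rng(0)
    # Illustrative losses of two positions with correlated normal marginals.
    losses = rng.multivariate_normal(mean=[0.0, 0.0],
                                     cov=[[1.0, 0.5], [0.5, 1.0]],
                                     size=100_000)
    alpha = 0.05
    # Component-wise (marginal) VaRs: the univariate benchmark that the paper
    # compares with the components of its directional multivariate VaR.
    print([var_alpha(losses[:, j], alpha) for j in range(2)])  # ~ [1.645, 1.645]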



Related research

A new risk measure, the lambda value at risk (Lambda VaR), has recently been proposed as a theoretical generalization of the value at risk (VaR). The Lambda VaR appears attractive for its potential ability to solve several problems of the VaR. In this paper we propose three nonparametric backtesting methodologies for the Lambda VaR that exploit different features. Two of these tests directly assess the correctness of the level of coverage predicted by the model; one of them is bilateral and provides an asymptotic result. A third test assesses the accuracy of the Lambda VaR, which depends on the choice of the P&L distribution; however, this test requires the storage of more information. Finally, we perform a backtesting exercise and compare our results with those of Hitaj and Peri (2015).
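The paper's three Lambda VaR tests are not reproduced here; as a point of reference, the sketch below shows the classical unconditional-coverage idea (a Kupiec-type likelihood-ratio test) that such level-of-coverage checks build on. All names are illustrative, assuming losses and one-step VaR forecasts are aligned arrays.

    import numpy as np
    from scipy.stats import chi2

    def unconditional_coverage_test(losses, var_forecasts, alpha):
        # Do VaR violations occur with the promised frequency alpha?
        violations = np.asarray(losses) > np.asarray(var_forecasts)
        n, x = violations.size, int(violations.sum())
        p_hat = min(max(x / n, 1e-12), 1.0 - 1e-12)  # guard degenerate cases
        loglik = lambda p: x * np.log(p) + (n - x) * np.log(1.0 - p)
        lr = -2.0 * (loglik(alpha) - loglik(p_hat))  # likelihood-ratio statistic
        return lr, 1.0 - chi2.cdf(lr, df=1)          # asymptotically chi2(1)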
Several well-established benchmark predictors exist for Value-at-Risk (VaR), a major instrument for financial risk management. Hybrid methods combining AR-GARCH filtering with skewed-$t$ residuals and the extreme value theory-based approach are particularly recommended. This study introduces yet another VaR predictor, G-VaR, which follows a novel methodology. Inspired by the recent mathematical theory of sublinear expectation, G-VaR is built upon the concept of model uncertainty, which in the present case signifies that the inherent volatility of financial returns cannot be characterized by a single distribution but rather by infinitely many statistical distributions. By considering the worst scenario among these potential distributions, the G-VaR predictor is precisely identified. Extensive experiments on both the NASDAQ Composite Index and S&P500 Index demonstrate the excellent performance of the G-VaR predictor, which is superior to most existing benchmark VaR predictors.
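The actual G-VaR is built within sublinear expectation theory and is not reconstructed here. The toy sketch below only illustrates the underlying "worst scenario over many distributions" idea: when volatility is known only up to an interval, take the largest VaR among the implied normal loss distributions. Every name and number in it is an assumption of this illustration.

    import numpy as np
    from scipy.stats import norm

    def worst_case_var(alpha, sigmas, mu=0.0):
        # Instead of a single volatility, scan a whole range of volatilities
        # and report the worst (largest) of the implied normal VaRs.
        z = norm.ppf(1.0 - alpha)        # alpha = exceedance probability
        return max(mu + s * z for s in sigmas)

    # Volatility known only to lie in [0.8, 1.2]:
    print(worst_case_var(0.05, sigmas=np.linspace(0.8, 1.2, 41)))  # ~ 1.974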
We derive bounds on the distribution function, and therefore also on the Value-at-Risk, of $\varphi(\mathbf{X})$, where $\varphi$ is an aggregation function and $\mathbf{X} = (X_1,\dots,X_d)$ is a random vector with known marginal distributions and partially known dependence structure. More specifically, we analyze three types of available information on the dependence structure: First, we consider the case where extreme value information, such as the distributions of partial minima and maxima of $\mathbf{X}$, is available. In order to include this information in the computation of Value-at-Risk bounds, we utilize a reduction principle that relates this problem to an optimization problem over a standard Fréchet class, which can then be solved by means of the rearrangement algorithm or using analytical results. Second, we assume that the copula of $\mathbf{X}$ is known on a subset of its domain, and finally we consider the case where the copula of $\mathbf{X}$ lies in the vicinity of a reference copula as measured by a statistical distance. In order to derive Value-at-Risk bounds in the latter situations, we first improve the Fréchet--Hoeffding bounds on copulas so as to include this additional information on the dependence structure. Then, we translate the improved Fréchet--Hoeffding bounds into bounds on the Value-at-Risk using the so-called improved standard bounds. In numerical examples we illustrate that the additional information typically leads to a significant improvement of the bounds compared to the marginals-only case.
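A minimal sketch of the rearrangement algorithm mentioned in the abstract, in its basic marginals-only form: discretize the upper tail of each marginal, then repeatedly rearrange each column to be oppositely ordered to the sum of the others; the minimum row sum approximates the worst-case VaR of the sum. This is deliberately simplified (a single grid, a fixed iteration budget, no convergence check), so it is an illustration rather than the paper's computational setup.

    import numpy as np
    from scipy.stats import pareto

    def ra_var_upper_bound(inv_cdfs, alpha, n_grid=1000, n_iter=50):
        # Worst-case VaR_alpha of X_1 + ... + X_d over all dependence
        # structures with the given marginals; here alpha is the quantile
        # level (close to 1), and inv_cdfs are the marginal quantile functions.
        probs = alpha + (1.0 - alpha) * (np.arange(n_grid) + 0.5) / n_grid
        X = np.column_stack([q(probs) for q in inv_cdfs])
        for _ in range(n_iter):                    # fixed iteration budget
            for j in range(X.shape[1]):
                rest = X.sum(axis=1) - X[:, j]
                # Make column j oppositely ordered to the sum of the others.
                ranks = np.argsort(np.argsort(-rest))
                X[:, j] = np.sort(X[:, j])[ranks]
        return X.sum(axis=1).min()                 # approximate upper bound

    # Three Pareto(2) risks with known marginals and unknown dependence.
    q = pareto(b=2).ppf
    print(ra_var_upper_bound([q, q, q], alpha=0.99))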
In this paper we propose a novel Bayesian methodology for Value-at-Risk computation based on parametric Product Partition Models. Value-at-Risk is a standard tool to measure and control the market risk of an asset or a portfolio, and it is also required for regulatory purposes. Its popularity is partly due to the fact that it is an easily understood measure of risk. The use of Product Partition Models allows us to remain in a Normal setting even in the presence of outlying points, and to obtain a closed-form expression for Value-at-Risk computation. We present and compare two different scenarios: a product partition structure on the vector of means and a product partition structure on the vector of variances. We apply our methodology to an Italian stock market data set from Mib30. The numerical results clearly show that Product Partition Models can be successfully exploited in order to quantify market risk exposure. The obtained Value-at-Risk estimates are in full agreement with Maximum Likelihood approaches, but our methodology provides richer information about the clustering structure of the data and the presence of outlying points.
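The paper's closed form comes from its specific PPM posterior, which is not reconstructed here. The sketch below only shows the generic ingredient such Normal-setting approaches rest on: each posterior draw of $(\mu, \sigma)$ yields the closed-form Normal VaR $\mu + \sigma z_{1-\alpha}$, and the draws can be averaged into a Bayesian point estimate. The synthetic draws stand in for whatever posterior sampler is used and are purely illustrative.

    import numpy as np
    from scipy.stats import norm

    def posterior_var(mu_draws, sigma_draws, alpha):
        # Each posterior draw of (mu, sigma) gives a closed-form normal VaR,
        # mu + sigma * z_{1-alpha}; average the draws for a point estimate.
        z = norm.ppf(1.0 - alpha)
        return np.mean(np.asarray(mu_draws) + np.asarray(sigma_draws) * z)

    # Illustrative stand-in for posterior output: draws around mu=0, sigma=1.
    rng = np.random.default_rng(1)
    print(posterior_var(rng.normal(0.0, 0.01, 5000),
                        rng.normal(1.0, 0.05, 5000), 0.05))  # ~ 1.645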
We propose a generalization of the classical notion of the $V@R_{\lambda}$ that takes into account not only the probability of the losses but also the balance between that probability and the amount of the loss. This is obtained by defining a new class of law-invariant risk measures based on an appropriate family of acceptance sets. The $V@R_{\lambda}$ and other known law-invariant risk measures turn out to be special cases of our proposal. We further prove the dual representation of risk measures on $\mathcal{P}(\mathbb{R})$.
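For reference, the classical construction that this abstract generalizes can be written down directly: an acceptance set $\mathcal{A}$ induces a risk measure as the smallest capital making a position acceptable, and the usual $V@R_{\lambda}$ arises from the acceptance set constraining only the loss probability. This is standard textbook material (the Föllmer--Schied framework), not the paper's new family, which additionally weighs how large the losses are, not only how likely:

    \[
    \rho_{\mathcal{A}}(X) \;=\; \inf\{\, m \in \mathbb{R} : X + m \in \mathcal{A} \,\},
    \qquad
    \mathcal{A}_{V@R_{\lambda}} \;=\; \{\, X : P(X < 0) \le \lambda \,\},
    \]

so that $V@R_{\lambda}(X) = \inf\{\, m : P(X + m < 0) \le \lambda \,\}$.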