Measuring corporate default risk is broadly important in economics and finance. Quantitative methods have been developed to predictively assess future corporate default probabilities. However, evaluating the uncertainties associated with default predictions, a more difficult yet crucial problem, remains little explored. In this paper, we attempt to fill this gap by developing a procedure for quantifying the level of associated uncertainties after carefully disentangling multiple contributing sources. Our framework effectively incorporates broad information from historical default data, corporate financial records, and macroeconomic conditions by a) characterizing the default mechanism, and b) capturing the future dynamics of the various features contributing to that mechanism. Our procedure overcomes the major challenges in this large-scale statistical inference problem and makes it practically feasible by using parsimonious models, innovative methods, and modern computational facilities. By predicting the market-wide total number of defaults and assessing the associated uncertainties, our method can also be used to evaluate the aggregate market credit risk level. Analyzing a US market data set, we demonstrate that the level of uncertainty associated with default risk assessments is indeed substantial. More informatively, we also find that the level of uncertainty associated with default risk predictions is correlated with the level of default risk, suggesting new scope for practical applications, including improving the accuracy of default risk assessments.
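To make the kind of uncertainty assessment described above concrete, the following sketch (in Python) simulates a purely hypothetical logistic default model: the coefficients, their standard errors, and the feature dynamics are illustrative assumptions, not the paper's estimates. Propagating parameter uncertainty, feature dynamics, and idiosyncratic default draws through Monte Carlo yields a predictive interval for the market-wide number of defaults.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n firms, k features (e.g. leverage, profitability, a macro index).
n_firms, n_features, n_sims = 500, 3, 10_000

# Assumed point estimates and standard errors of a logistic default model
# (purely illustrative, not estimates from the paper).
beta_hat = np.array([-3.0, 1.2, -0.8, 0.5])
beta_se = np.array([0.15, 0.10, 0.10, 0.08])

# Current firm features; their one-period-ahead dynamics are simulated with noise
# to represent uncertainty about future conditions.
X_now = rng.normal(size=(n_firms, n_features))

totals = np.empty(n_sims)
for s in range(n_sims):
    beta = rng.normal(beta_hat, beta_se)                       # parameter uncertainty
    X_next = X_now + rng.normal(scale=0.3, size=X_now.shape)   # feature dynamics
    logits = beta[0] + X_next @ beta[1:]
    p_default = 1.0 / (1.0 + np.exp(-logits))
    totals[s] = rng.binomial(1, p_default).sum()               # idiosyncratic default draws

print("mean predicted defaults:", totals.mean())
print("90% predictive interval:", np.percentile(totals, [5, 95]))

The width of the resulting interval is the kind of uncertainty level that the procedure reports alongside the point prediction of the total number of defaults.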
We propose a method to assess the intrinsic risk carried by a financial position $X$ when the agent faces uncertainty about the pricing rule assigning its present value. Our approach is inspired by a new interpretation of the quasiconvex duality in a Knightian setting, where a family of probability measures replaces the single reference probability and is then applied to value financial positions. Diametrically, our construction of Value&Risk measures is based on the selection of a basket of claims to test the reliability of models. We compare a random payoff $X$ with a given class of derivatives written on $X$, and use these derivatives to ``test'' the pricing measures. We further introduce and study a general class of Value&Risk measures $R(p, X, \mathbb{P})$ that describes the additional capital that is required to make $X$ acceptable under a probability $\mathbb{P}$ and given the initial price $p$ paid to acquire $X$.
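One plausible reading of this definition, written here only schematically (the precise acceptance sets, built from the chosen basket of test derivatives, may differ from the paper's), is
\[
R(p, X, \mathbb{P}) \;=\; \inf\bigl\{\, m \in \mathbb{R} \;:\; X + m \in \mathcal{A}_{p,\mathbb{P}} \,\bigr\},
\]
where $\mathcal{A}_{p,\mathbb{P}}$ denotes the positions judged acceptable under the pricing measure $\mathbb{P}$ once the initial price $p$ has been paid, so that $R(p, X, \mathbb{P})$ is the smallest capital add-on making $X$ acceptable.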
Recently, incomplete-market techniques have been used to develop a model applicable to credit default swaps (CDSs), with results quite different from those obtained using the market-standard model. This article uses the new incomplete-market model to further study CDS hedging and extends the model so that it can treat single-name CDS portfolios. A hedge called the vanilla hedge is also described, and with it analytic results are obtained that explain the striking features of the plot of no-arbitrage bounds versus CDS maturity for illiquid CDSs. The valuation process that follows from the incomplete-market model is an integrated modelling and risk management procedure: it first uses the model to find the arbitrage-free range of fair prices, and then requires risk management professionals for both the buyer and the seller to find, as a basis for negotiation, prices that respect the range of fair prices determined by the model and also benefit their firms. Finally, in a section on numerical results, the striking behavior of the no-arbitrage bounds as a function of CDS maturity is illustrated, and several examples describe the reduction in risk achieved by hedging single-name CDS portfolios.
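For context on the no-arbitrage bounds discussed above, the generic incomplete-market formulation (stated here in textbook form, not as the article's specific CDS model) bounds the value of a discounted payoff stream $H$ by extremizing over the set $\mathcal{Q}$ of equivalent martingale measures:
\[
\underline{\pi}(H) \;=\; \inf_{\mathbb{Q} \in \mathcal{Q}} \mathbb{E}^{\mathbb{Q}}[H],
\qquad
\overline{\pi}(H) \;=\; \sup_{\mathbb{Q} \in \mathcal{Q}} \mathbb{E}^{\mathbb{Q}}[H],
\]
and any price in $[\underline{\pi}(H), \overline{\pi}(H)]$ is consistent with the absence of arbitrage; this is the range within which the negotiated price is then sought.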
The risk and return profiles of a broad class of dynamic trading strategies, including pairs trading and other statistical arbitrage strategies, may be characterized in terms of excursions of the market price of a portfolio away from a reference level. We propose a mathematical framework for the risk analysis of such strategies, based on a description in terms of price excursions, first in a pathwise setting, without probabilistic assumptions, and then in a Markovian setting. We introduce the notion of a $\delta$-excursion, defined as a path which deviates by $\delta$ from a reference level before returning to this level. We show that every continuous path has a unique decomposition into $\delta$-excursions, which is useful for the scenario analysis of dynamic trading strategies, leading to simple expressions for the number of trades, realized profit, maximum loss and drawdown. As $\delta$ is decreased to zero, properties of this decomposition relate to the local time of the path. When the underlying asset follows a Markov process, we combine these results with Itô's excursion theory to obtain a tractable decomposition of the process as a concatenation of independent $\delta$-excursions, whose distribution is described in terms of Itô's excursion measure. We provide analytical results for linear diffusions and give new examples of stochastic processes for flexible and tractable modeling of excursions. Finally, we describe a non-parametric scenario simulation method for generating paths whose excursion properties match those observed in empirical data.
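The pathwise decomposition lends itself to a simple discrete-time illustration. The sketch below (in Python) is an illustrative approximation, not the paper's construction: the mean-reverting toy path, the reference level, and the one-trade-per-excursion rule are all assumptions made for the example.

import numpy as np

def delta_excursions(path, ref=0.0, delta=1.0):
    # Split a discrete path into segments that first deviate from the
    # reference level by at least delta and end at the next return to
    # that level: a rough analogue of the delta-excursion decomposition.
    excursions, start = [], None
    for t, xt in enumerate(path):
        dev = xt - ref
        if start is None:
            if abs(dev) >= delta:
                start = t                      # excursion reaches amplitude delta
        elif (path[start] - ref) * dev <= 0:   # first return to the reference level
            excursions.append((start, t))
            start = None
    return excursions

# Toy mean-reverting path, purely illustrative.
rng = np.random.default_rng(1)
x = np.zeros(2000)
for t in range(1, len(x)):
    x[t] = 0.98 * x[t - 1] + 0.2 * rng.normal()

excs = delta_excursions(x, ref=0.0, delta=0.5)

# One trade per excursion: short if the path is above the reference at entry,
# long if below, and close the position at the return to the reference level.
pnl = sum(np.sign(0.0 - x[s]) * (x[e] - x[s]) for s, e in excs)
print("number of trades:", len(excs), "realized P&L:", round(pnl, 3))

The number of trades, realized profit, maximum loss and drawdown mentioned above are then simple functionals of this list of excursions.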
The paper analyzes risk assessment for cash flows in continuous time using the notion of convex risk measures for processes. By combining a decomposition result for optional measures with a dual representation of convex risk measures for bounded càdlàg processes, we show that this framework provides a systematic approach to both model ambiguity and uncertainty about the time value of money. We also establish a link between risk measures for processes and BSDEs.
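As a point of reference for the dual representation invoked above, the static version for a convex risk measure $\rho$ on bounded positions reads (schematically; the theorem for càdlàg processes refines this)
\[
\rho(X) \;=\; \sup_{\mathbb{Q}} \Bigl( \mathbb{E}^{\mathbb{Q}}[-X] \;-\; \alpha(\mathbb{Q}) \Bigr),
\]
with a penalty function $\alpha$. In the process setting the dual variables carry both a probabilistic component and a discounting component, which is how model ambiguity and uncertainty about the time value of money enter separately.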
In this paper, we study general monetary risk measures (without assuming convexity or any weak form of convexity). A monetary (respectively, positively homogeneous) risk measure can be characterized as the lower envelope of a family of convex (respectively, coherent) risk measures. The proof does not depend on, but easily leads to, the classical representation theorems for convex and coherent risk measures. When law-invariance and SSD (second-order stochastic dominance)-consistency are involved, it is not convexity (respectively, coherence) but comonotonic convexity (respectively, comonotonic coherence) of risk measures that supports such lower envelope characterizations in a unified form. A representation of law-invariant risk measures in terms of VaR is also provided.
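Schematically, in the notation of this summary, the lower envelope characterization states that a monetary risk measure $\rho$ can be written pointwise as
\[
\rho(X) \;=\; \inf_{\gamma \in \Gamma} \rho_\gamma(X)
\]
for some family $(\rho_\gamma)_{\gamma \in \Gamma}$ of convex (respectively, coherent) risk measures, while the VaR entering the law-invariant representation is the usual quantile-based quantity, one common convention being $\mathrm{VaR}_\alpha(X) = \inf\{\, m \in \mathbb{R} : \mathbb{P}(X + m < 0) \le \alpha \,\}$; the specific family and the precise form of the VaR-based representation are given in the paper.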