The leverage effect, the correlation between an asset's return and its volatility, has played a key role in forecasting and understanding volatility and risk. While there is a long-standing consensus that leverage effects exist and improve forecasts, empirical evidence paradoxically does not show that most individual stocks exhibit this phenomenon, mischaracterizing risk and therefore leading to poor predictive performance. We examine this paradox, with the goal of improving density forecasts, by relaxing the assumption of linearity in the leverage effect. Nonlinear generalizations of the leverage effect are proposed within the Bayesian stochastic volatility framework in order to capture flexible leverage structures, whereby small fluctuations in prices have a different effect from large shocks. An efficient Bayesian sequential computation scheme is developed and implemented to estimate this effect in a practical, on-line manner. Examining 615 stocks that comprise the S&P 500 and Nikkei 225, we find that relaxing the linear assumption in favor of our proposed nonlinear leverage effect function improves predictive performance for 89% of all stocks compared to the conventional model assumption.
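To make the idea concrete, here is a minimal simulation sketch of a stochastic volatility model in which the correlation between return and volatility shocks depends on the size of the shock. The smooth `leverage` function, its parameters, and all numerical values are illustrative assumptions, not the specification estimated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def leverage(eps, rho_small=-0.1, rho_large=-0.7, scale=1.5):
    # Hypothetical smooth leverage function: weak negative correlation for
    # small shocks, strong negative correlation for large ones.
    w = np.tanh((eps / scale) ** 2)   # ~0 for small |eps|, -> 1 for large
    return rho_small + (rho_large - rho_small) * w

def simulate_sv(T=2000, mu=-9.0, phi=0.97, sigma_eta=0.2):
    # r_t = exp(h_t / 2) * eps_t,
    # h_{t+1} = mu + phi * (h_t - mu) + sigma_eta * eta_t,
    # with corr(eps_t, eta_t) = leverage(eps_t) instead of a constant rho.
    h, r = np.empty(T), np.empty(T)
    h[0] = mu
    for t in range(T - 1):
        eps = rng.standard_normal()
        r[t] = np.exp(h[t] / 2) * eps
        rho = leverage(eps)
        eta = rho * eps + np.sqrt(1 - rho ** 2) * rng.standard_normal()
        h[t + 1] = mu + phi * (h[t] - mu) + sigma_eta * eta
    r[-1] = np.exp(h[-1] / 2) * rng.standard_normal()
    return r, h

returns, log_variance = simulate_sv()
```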
We build a simple model of leveraged asset purchases with margin calls. Investment funds use what is perhaps the most basic financial strategy, called value investing, i.e. systematically attempting to buy underpriced assets. When funds do not borrow, the price fluctuations of the asset are normally distributed and uncorrelated across time. All this changes when the funds are allowed to leverage, i.e. to borrow from a bank in order to purchase more assets than their wealth would otherwise permit. During good times, competition drives investors to funds that use more leverage, because they have higher profits. As leverage increases, price fluctuations become heavy-tailed and display clustered volatility, similar to what is observed in real markets. Previous explanations of fat tails and clustered volatility depended on irrational behavior, such as trend following. Here they instead arise from the fact that leverage limits cause funds to sell into a falling market: a prudent bank makes itself locally safer by putting a limit on leverage, so when a fund exceeds its leverage limit, it must partially repay its loan by selling the asset. Unfortunately, this sometimes happens to all the funds simultaneously when the price is already falling. The resulting nonlinear feedback amplifies large downward price movements. At the extreme this causes crashes, but the effect is seen at every time scale, producing a power law of price disturbances. A standard (supposedly more sophisticated) risk-control policy, in which individual banks base leverage limits on volatility, causes leverage to rise during periods of low volatility and to contract more quickly when volatility gets high, making these extreme fluctuations even worse.
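The mechanism can be illustrated with a stylized simulation: mean-reverting noise-trader demand plus a single value-investing fund whose leverage is capped by a bank, so that breaching the cap forces asset sales into a falling market. The market-clearing rule, parameter values, and default/replacement scheme below are simplifying assumptions for illustration, not the paper's calibrated model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(T=20000, V=1.0, N=1000.0, lam_max=10.0, beta=50.0,
             rho=0.99, sigma=0.035, W0=2.0):
    # Noise-trader cash xi_t mean-reverts (in logs) around the level N * V
    # that would price the asset at its fundamental value V.
    log_xi_bar = np.log(N * V)
    log_xi = log_xi_bar
    p, W, D = V, W0, 0.0            # price, fund wealth, fund shares held
    prices = np.empty(T)
    for t in range(T):
        log_xi = rho * log_xi + (1 - rho) * log_xi_bar \
                 + sigma * rng.standard_normal()
        m = max(0.0, V - p)                     # perceived underpricing
        lam = min(lam_max, beta * m)            # bank caps the fund's leverage
        p_new = (np.exp(log_xi) + lam * W) / N  # market clearing for N shares
        W = W + D * (p_new - p)                 # mark holdings to market
        if W <= 0.0:                            # fund defaults and is replaced
            W, D = W0, 0.0
        else:
            D = lam * W / p_new                 # a breached cap forces selling
        p = p_new
        prices[t] = p
    return np.diff(np.log(prices))

r = simulate()
print("excess kurtosis:", ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3)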
In this paper we develop a Bayesian procedure for estimating multivariate stochastic volatility (MSV) using state space models. A multiplicative model based on inverted Wishart and multivariate singular beta distributions is proposed for the evolution of the volatility, and a flexible sequential volatility-updating scheme is employed. Being computationally fast, the resulting estimation procedure is particularly suitable for on-line forecasting. Three performance measures are discussed in the context of model selection: the log-likelihood criterion, the mean of the standardized one-step forecast errors, and sequential Bayes factors. Finally, the proposed methods are applied to a data set comprising eight exchange rates vis-à-vis the US dollar.
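As a rough illustration of this kind of sequential updating, the sketch below runs a discount-factor recursion on an inverse-Wishart volatility posterior, which keeps every step conjugate and therefore fast. The discount factor `delta` and prior values are assumptions; the paper's multiplicative Wishart/singular-beta construction is mirrored only in spirit here.

```python
import numpy as np

def sequential_volatility(Y, delta=0.95, nu0=10.0, S0=None):
    # Y : (T, p) array of returns. Maintains an inverse-Wishart posterior
    # (nu, S) for Sigma_t; discounting lets the volatility evolve over time.
    # Returns the sequence of posterior mean covariance matrices.
    T, p = Y.shape
    S = np.eye(p) if S0 is None else S0.copy()
    nu = nu0
    means = np.empty((T, p, p))
    for t in range(T):
        nu, S = delta * nu, delta * S          # discount: forget the past
        y = Y[t]
        nu, S = nu + 1.0, S + np.outer(y, y)   # conjugate update with y_t
        means[t] = S / (nu - p - 1.0)          # inverse-Wishart mean (nu > p+1)
    return means

# toy usage with simulated returns
rng = np.random.default_rng(2)
Y = rng.standard_normal((500, 3)) @ np.diag([1.0, 0.5, 2.0])
Sigmas = sequential_volatility(Y)
```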
The volatility of a financial stock refers to the degree of uncertainty or risk embedded within the stock's dynamics. Such risk has received a great deal of attention from financial researchers. Following the concept of regime-switching models, we propose a non-parametric approach, named encoding-and-decoding, to discover multiple volatility states embedded within a discrete time series of stock returns. The encoding is performed across the entire span of time points by flagging relatively extreme events with respect to a chosen quantile-based threshold; the return time series is thereby transformed into a Bernoulli-variable process. In the decoding phase, we computationally search for change-point locations with a new search algorithm used in conjunction with the Bayesian information criterion, applied to the observed collection of recurrence times of the binary process. Apart from the independence assumption required to build the geometric likelihood function, the proposed approach can partition the entire return time series into a collection of homogeneous segments without any assumptions about the dynamic structure or the underlying distributions. In numerical experiments, our approach compares favorably with the Viterbi algorithm under hidden Markov model (HMM) settings. In real-data applications, the volatility dynamics of every single stock in the S&P 500 are computed and revealed. A nonlinear dependency between any pair of stocks is then derived by measuring the concurrence of their volatility states. Finally, various networks with distinct financial implications are established to represent different aspects of global connectivity among all stocks in the S&P 500.
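A minimal sketch of the encode/decode pipeline, under the stated geometric (independence) assumption: returns are binarized at a quantile threshold, recurrence times between extreme events are collected, and a change point is accepted only if it lowers the BIC. The single-split search and all thresholds below are simplifications of the full algorithm, for illustration only.

```python
import numpy as np

def encode(returns, q=0.9):
    # 1 if |return| exceeds the q-quantile of |returns| (an "extreme" event).
    thr = np.quantile(np.abs(returns), q)
    return (np.abs(returns) > thr).astype(int)

def geometric_loglik(gaps):
    # Geometric likelihood of recurrence times, MLE rate p = 1 / mean gap.
    p = min(1.0 / np.mean(gaps), 1.0 - 1e-9)
    return len(gaps) * np.log(p) + (np.sum(gaps) - len(gaps)) * np.log(1.0 - p)

def single_changepoint_bic(events, margin=5):
    # Compare one geometric regime against the best two-regime split of the
    # recurrence times; accept the split only if it lowers the BIC.
    gaps = np.diff(np.flatnonzero(events))   # recurrence times between events
    n = len(gaps)
    bic0 = -2.0 * geometric_loglik(gaps) + np.log(n)
    best_k, best_bic = None, np.inf
    for k in range(margin, n - margin):
        ll = geometric_loglik(gaps[:k]) + geometric_loglik(gaps[k:])
        bic = -2.0 * ll + 3.0 * np.log(n)    # two rates plus one change point
        if bic < best_bic:
            best_k, best_bic = k, bic
    return (best_k, best_bic) if best_bic < bic0 else (None, bic0)

# toy usage: a calm regime followed by a turbulent one
rng = np.random.default_rng(5)
r = np.concatenate([0.01 * rng.standard_normal(1500),
                    0.03 * rng.standard_normal(1500)])
print(single_changepoint_bic(encode(r)))
```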
This paper is concerned with the estimation of the volatility process in a stochastic volatility model of the following form: $dX_t = a_t\,dt + \sigma_t\,dW_t$, where $X$ denotes the log-price and $\sigma$ is a càdlàg semi-martingale. In the spirit of a series of recent works on the estimation of the cumulated volatility, we here focus on the instantaneous volatility, for which we study estimators built as finite differences of the power variations of the log-price. We provide central limit theorems with an optimal rate depending on the local behavior of $\sigma$. In particular, these theorems yield confidence intervals for $\sigma_t$.
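For the quadratic case ($p = 2$), such an estimator amounts to a finite difference of the realized power variation over a local window, as in the sketch below; the window size `k` is a tuning choice that trades bias against variance, and the toy check uses a path with known piecewise-constant volatility.

```python
import numpy as np

def spot_vol(X, dt, k):
    # Finite difference of the cumulated quadratic variation over a local
    # window of k increments: a basic spot-variance estimator (power p = 2).
    # Returns sigma_hat^2 on the grid of interior times.
    dX2 = np.diff(X) ** 2
    csum = np.concatenate(([0.0], np.cumsum(dX2)))
    return (csum[k:] - csum[:-k]) / (k * dt)

# toy check: volatility jumps from 0.2 to 0.5 halfway through the sample
rng = np.random.default_rng(3)
n, dt = 20000, 1.0 / 20000
sigma = np.where(np.arange(n) < n // 2, 0.2, 0.5)
X = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n))
sig2_hat = spot_vol(X, dt, k=200)
print(sig2_hat[1000], sig2_hat[-1000])   # roughly 0.04 and 0.25
```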
We study the price dynamics of 65 stocks from the Dow Jones Composite Average from 1973 until 2014. We show that it is possible to define a Daily Market Volatility $\sigma(t)$ which is directly observable from data. This quantity is usually defined only indirectly through $r(t) = \sigma(t)\,\omega(t)$, where the $r(t)$ are the daily returns of the market index and the $\omega(t)$ are i.i.d. random variables with zero mean and unit variance. The relation $r(t) = \sigma(t)\,\omega(t)$ alone is unable to give an operative definition of the index volatility, which remains unobservable. On the contrary, we show that, using the whole information available in the market, the index volatility can be operatively defined and detected.
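The paper's precise operative definition is not reproduced here, but one plausible cross-sectional proxy in the same spirit, an assumption for illustration only, averages suitably normalized absolute returns across stocks each day:

```python
import numpy as np

def daily_market_volatility(R):
    # Hypothetical proxy (not the paper's exact construction): normalize each
    # stock's returns by its own average absolute return, then average across
    # stocks, yielding one directly observable number per day.
    # R : (T, N) array of daily returns for N stocks.
    scale = np.mean(np.abs(R), axis=0)          # per-stock typical size
    return np.mean(np.abs(R) / scale, axis=1)

# sanity check on synthetic data driven by a common volatility factor
rng = np.random.default_rng(4)
T, N = 2000, 65
sigma = np.exp(0.3 * np.cumsum(rng.standard_normal(T)) / np.sqrt(T))
R = sigma[:, None] * rng.standard_normal((T, N))
sig_hat = daily_market_volatility(R)
print(np.corrcoef(sigma, sig_hat)[0, 1])        # high correlation expected
```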