One of the major issues studied in finance, one that has always intrigued both scholars and practitioners and for which no unified theory has yet been developed, is the reason why prices move over time. Several well-known traditional techniques exist in the literature to measure stock market volatility, and the central point of this debate, which constitutes the actual scope of this paper, is to compare that common approach, represented by popular techniques such as the standard deviation, with an innovative methodology based on Econophysics. In our study, we use the concept of Tsallis entropy to capture the nature of volatility. More precisely, we want to find out whether Tsallis entropy is able to detect volatility in stock market indexes and to compare its values with those obtained from the standard deviation. We also note that one of the advantages of this new methodology is its ability to capture nonlinear dynamics. For this purpose, we focus on the behaviour of stock market indexes and consider the CAC 40, MIB 30, NIKKEI 225, PSI 20, IBEX 35, FTSE 100 and S&P 500 for a comparative analysis of the approaches mentioned above.
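A minimal sketch of the kind of comparison described above, assuming a histogram-based estimate of the Tsallis entropy $S_q=(1-\sum_i p_i^{q})/(q-1)$ computed over rolling windows of returns alongside the rolling standard deviation; the window length, bin count, entropic index $q$ and the simulated fat-tailed return series are illustrative assumptions, not the paper's settings.

```python
# Sketch: rolling standard deviation vs. a histogram-based Tsallis entropy
# estimate, S_q = (1 - sum_i p_i^q) / (q - 1). Window length, bin count and
# the entropic index q are illustrative choices.
import numpy as np

def tsallis_entropy(returns, q=1.5, bins=30):
    """Estimate the Tsallis entropy of a sample of returns from a histogram."""
    counts, _ = np.histogram(returns, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                          # drop empty bins
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def rolling_measures(returns, window=250, q=1.5):
    """Rolling standard deviation and Tsallis entropy over a return series."""
    std, ent = [], []
    for start in range(len(returns) - window + 1):
        chunk = returns[start:start + window]
        std.append(np.std(chunk, ddof=1))
        ent.append(tsallis_entropy(chunk, q=q))
    return np.array(std), np.array(ent)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Student-t returns as a stand-in for a fat-tailed index return series
    r = rng.standard_t(df=3, size=2000) * 0.01
    vol, s_q = rolling_measures(r)
    print("correlation of the two measures:", np.corrcoef(vol, s_q)[0, 1])
```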
We investigate the probability distribution of the volatility return intervals $\tau$ for the Chinese stock market. We rescale both the probability distribution $P_{q}(\tau)$ and the volatility return intervals $\tau$ as $P_{q}(\tau)=\frac{1}{\bar{\tau}} f(\tau/\bar{\tau})$ to obtain a uniform scaling curve for different threshold values $q$. The scaling curve can be well fitted by the stretched exponential function $f(x) \sim e^{-\alpha x^{\gamma}}$, which suggests that memory exists in $\tau$. To demonstrate the memory effect, we investigate the conditional probability distribution $P_{q}(\tau|\tau_{0})$, the mean conditional interval $\langle\tau|\tau_{0}\rangle$ and the cumulative probability distribution of the cluster size of $\tau$. The results show a clear clustering effect. We further investigate the persistence probability distribution $P_{\pm}(t)$ and find that $P_{-}(t)$ decays as a power law with an exponent far from the value 0.5 expected for a random walk, which further confirms that long memory exists in $\tau$. The scaling and long memory effects of $\tau$ for the Chinese stock market are similar to those obtained for the United States and Japanese financial markets.
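A minimal sketch of the rescaling and fit described above, assuming a toy volatility proxy: intervals between exceedances of a threshold $q$ (in units of the volatility standard deviation) are rescaled by their mean and the empirical density is fitted to $c\,e^{-\alpha x^{\gamma}}$. The data, thresholds and fitting details are illustrative assumptions; uncorrelated toy data should return $\gamma\approx1$ rather than the stretched value reported for real markets.

```python
# Sketch: volatility return intervals above a threshold q, rescaled by their
# mean, with a stretched exponential fit of the empirical density.
import numpy as np
from scipy.optimize import curve_fit

def return_intervals(volatility, q):
    """Waiting times between exceedances of threshold q (in std units)."""
    v = (volatility - volatility.mean()) / volatility.std()
    return np.diff(np.flatnonzero(v > q))

def stretched_exp(x, c, a, gamma):
    # f(x) = c * exp(-a * x**gamma); the abstract reports gamma < 1 for real data
    return c * np.exp(-a * x ** gamma)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vol = np.abs(rng.standard_t(df=3, size=100_000))   # toy volatility proxy
    for q in (1.0, 1.5, 2.0):
        tau = return_intervals(vol, q)
        x = tau / tau.mean()                           # rescaled intervals
        dens, edges = np.histogram(x, bins=50, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        mask = dens > 0
        (c, a, gamma), _ = curve_fit(stretched_exp, centers[mask], dens[mask],
                                     p0=(1.0, 1.0, 1.0), maxfev=10_000)
        print(f"q={q}: c={c:.2f}, alpha={a:.2f}, gamma={gamma:.2f}")
```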
The bid-ask spread is taken as an important measure of financial market liquidity. In this article, we study the dynamics of the spread return and the spread volatility of four liquid stocks in the Chinese stock market, including the memory effect and the multifractal nature. By investigating the autocorrelation function and applying Detrended Fluctuation Analysis (DFA), we find that the spread return lacks long-range memory, while the spread volatility is long-range time correlated. Moreover, by applying Multifractal Detrended Fluctuation Analysis (MF-DFA), the spread return is observed to possess strong multifractality, which is similar to the dynamics of a variety of financial quantities. In contrast to the spread return, the spread volatility exhibits only a weak multifractal nature.
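A bare-bones DFA routine of the kind mentioned above, offered as an illustrative sketch: the scale range, detrending order and simulated series are assumptions, and MF-DFA would additionally vary the moment order of the fluctuation average.

```python
# Sketch: plain Detrended Fluctuation Analysis (DFA) applied to a toy
# "spread return" and its absolute value as a toy "spread volatility".
import numpy as np

def dfa(signal, scales, order=1):
    """Return the fluctuation function F(s) for the given window sizes."""
    profile = np.cumsum(signal - np.mean(signal))       # integrated profile
    fluct = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms.append(np.mean((seg - trend) ** 2))
        fluct.append(np.sqrt(np.mean(rms)))
    return np.array(fluct)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    ret = rng.standard_normal(20_000)                   # toy spread return
    vol = np.abs(ret)                                   # toy spread volatility
    scales = np.unique(np.logspace(1.2, 3.2, 20).astype(int))
    for name, x in (("return", ret), ("volatility", vol)):
        F = dfa(x, scales)
        h = np.polyfit(np.log(scales), np.log(F), 1)[0]  # DFA scaling exponent
        print(f"{name}: DFA exponent ~ {h:.2f}")
```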
We empirically investigated the relationship between the degree of efficiency and the predictability of financial time-series data. The Hurst exponent was used as the measure of the degree of efficiency, and the hit rate calculated from the nearest-neighbor prediction method was used to quantify the predictability of the direction of future price changes. We used 60 market indexes of various countries. We found empirically that the relationship between the degree of efficiency (the Hurst exponent) and the predictability (the hit rate) is strongly positive: a market index with a higher Hurst exponent tends to have a higher hit rate. These results suggest that the Hurst exponent is useful for predicting future price changes. Furthermore, we also found that the Hurst exponent and the hit rate are useful as criteria for distinguishing emerging capital markets from mature capital markets.
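A rough sketch of the two ingredients above, assuming a rescaled-range (R/S) estimate of the Hurst exponent and a nearest-neighbour directional forecast scored by its hit rate; the embedding dimension, scale grid and train/test split are illustrative choices rather than the paper's configuration.

```python
# Sketch: R/S Hurst exponent plus a nearest-neighbour directional forecast.
import numpy as np

def hurst_rs(x, scales):
    """Hurst exponent from the slope of log(R/S) against log(scale)."""
    rs = []
    for s in scales:
        segs = x[:len(x) // s * s].reshape(-1, s)
        dev = np.cumsum(segs - segs.mean(axis=1, keepdims=True), axis=1)
        r = dev.max(axis=1) - dev.min(axis=1)
        sd = segs.std(axis=1, ddof=1)
        rs.append(np.mean(r / sd))
    return np.polyfit(np.log(scales), np.log(rs), 1)[0]

def nn_hit_rate(returns, dim=5, split=0.8):
    """Predict the sign of the next return from the nearest past delay vector."""
    X = np.lib.stride_tricks.sliding_window_view(returns[:-1], dim)
    y = np.sign(returns[dim:])
    n_train = int(split * len(X))
    hits = 0
    for i in range(n_train, len(X)):
        d = np.linalg.norm(X[:n_train] - X[i], axis=1)
        hits += np.sign(returns[dim + np.argmin(d)]) == y[i]
    return hits / (len(X) - n_train)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    r = rng.standard_normal(3000) * 0.01                # toy index return series
    scales = np.array([10, 20, 40, 80, 160, 320])
    print("Hurst exponent:", round(hurst_rs(r, scales), 2))
    print("hit rate:", round(nn_hit_rate(r), 2))
```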
The distribution of the return intervals $\tau$ between volatilities above a threshold $q$ in financial records has been approximated by a scaling behavior. To explore how accurate this scaling is, and thereby understand the underlying non-linear mechanism, we investigate intraday datasets of the 500 stocks that constitute the Standard & Poor's 500 index. We show that the cumulative distribution of return intervals has systematic deviations from scaling. We support this finding by studying the $m$-th moment $\mu_m \equiv \langle(\tau/\langle\tau\rangle)^m\rangle^{1/m}$, which shows a certain trend with the mean interval $\langle\tau\rangle$. We generate surrogate records using the Schreiber method, and find that their cumulative distributions almost collapse onto a single curve and that their moments are almost constant over most of the range of $\langle\tau\rangle$. These substantial differences suggest that non-linear correlations in the original volatility sequence account for the deviations from a single scaling law. We also find that both the original and surrogate records exhibit slight tendencies at short and long $\langle\tau\rangle$, due to the discreteness and the finite size of the records, respectively. To minimize these effects when testing for multiscaling behavior, we investigate the moments in the range $10<\langle\tau\rangle\leq100$, and find that the exponent $\alpha$ from the power-law fit $\mu_m\sim\langle\tau\rangle^{\alpha}$ has a narrow distribution around a value $\alpha\neq0$ that depends on $m$ for the 500 stocks. The distribution of $\alpha$ for the surrogate records is very narrow and centered around $\alpha=0$. This suggests that the return interval distribution exhibits multiscaling behavior due to the non-linear correlations in the original volatility.
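A simplified sketch of the moment analysis above: for a toy volatility series, the threshold is varied to obtain different mean intervals $\langle\tau\rangle$, the moment $\mu_m$ is computed, and the exponent $\alpha$ is extracted from a power-law fit over $10<\langle\tau\rangle\leq100$. A plain random shuffle is used as a crude stand-in for the Schreiber surrogate method (it destroys all correlations, not only the non-linear ones), so this illustrates the bookkeeping rather than the paper's surrogate test.

```python
# Sketch: mu_m = <(tau/<tau>)^m>^(1/m) versus <tau>, with a power-law fit
# mu_m ~ <tau>^alpha over the range 10 < <tau> <= 100.
import numpy as np

def intervals(vol, q):
    """Return intervals above threshold q (in std units)."""
    v = (vol - vol.mean()) / vol.std()
    return np.diff(np.flatnonzero(v > q))

def moment_ratio(tau, m):
    x = tau / tau.mean()
    return np.mean(x ** m) ** (1.0 / m)

def alpha_exponent(vol, m=2, thresholds=np.arange(0.5, 3.1, 0.25)):
    """Slope of log(mu_m) against log(<tau>) for thresholds with 10 < <tau> <= 100."""
    means, mus = [], []
    for q in thresholds:
        tau = intervals(vol, q)
        if len(tau) < 50:
            continue
        mt = tau.mean()
        if 10 < mt <= 100:
            means.append(mt)
            mus.append(moment_ratio(tau, m))
    return np.polyfit(np.log(means), np.log(mus), 1)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    vol = np.abs(rng.standard_t(df=3, size=200_000))     # toy volatility proxy
    shuffled = rng.permutation(vol)                      # crude surrogate
    print("alpha (original):", round(alpha_exponent(vol), 3))
    print("alpha (shuffled):", round(alpha_exponent(shuffled), 3))
```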
The volatility of a financial stock refers to the degree of uncertainty or risk embedded in the stock's dynamics. Such risk has received a great deal of attention from financial researchers. Following the concept of regime-switching models, we propose a non-parametric approach, named encoding-and-decoding, to discover multiple volatility states embedded within a discrete time series of stock returns. The encoding is performed across the entire span of time points for relatively extreme events with respect to a chosen quantile-based threshold, so that the return time series is transformed into a Bernoulli-variable process. In the decoding phase, we computationally search for the locations of change points via estimation based on a new search algorithm used in conjunction with the Bayesian information criterion, applied to the observed collection of recurrence times of the binary process. Aside from the independence assumption required to build the Geometric likelihood function, the proposed approach can partition the entire return time series into a collection of homogeneous segments without any assumptions about the dynamic structure or underlying distributions. In numerical experiments, our approach compares favorably with the Viterbi algorithm under Hidden Markov Model (HMM) settings. In the real-data applications, the volatility dynamics of every single stock in the S&P 500 are computed and revealed. A non-linear dependency for any stock pair is then derived by measuring concurrent volatility states. Finally, various networks dealing with distinct financial implications are established to represent different aspects of the global connectivity among all stocks in the S&P 500.
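A schematic sketch of an encoding-and-decoding style pipeline as described above: returns are encoded as a Bernoulli sequence of quantile exceedances (absolute returns and a single upper quantile are used here as an assumption), and the recurrence times are segmented with a Geometric likelihood scored by the BIC. The greedy binary-segmentation search is a generic stand-in, not the paper's specific algorithm, and all parameter values are illustrative.

```python
# Sketch: encode returns as threshold exceedances, then segment the recurrence
# times with a Geometric likelihood and the BIC via greedy binary segmentation.
import numpy as np

def encode(returns, quantile=0.85):
    """Recurrence times between exceedances of a quantile threshold on |returns|."""
    thr = np.quantile(np.abs(returns), quantile)
    events = np.flatnonzero(np.abs(returns) > thr)
    return np.diff(events)                               # recurrence times >= 1

def geom_loglik(tau):
    """Maximized Geometric log-likelihood of one homogeneous segment."""
    p = np.clip(1.0 / np.mean(tau), 1e-9, 1 - 1e-9)
    return np.sum(np.log(p) + (tau - 1) * np.log(1 - p))

def bic(tau, cuts):
    """BIC of the partition of tau defined by sorted interior cut positions."""
    bounds = [0, *cuts, len(tau)]
    ll = sum(geom_loglik(tau[a:b]) for a, b in zip(bounds[:-1], bounds[1:]))
    k = 2 * len(cuts) + 1                                # one rate per segment + cuts
    return -2 * ll + k * np.log(len(tau))

def segment(tau, min_size=20, max_cuts=5):
    """Greedily add the cut that lowers the BIC the most; stop when none does."""
    cuts = []
    while len(cuts) < max_cuts:
        best, best_bic = None, bic(tau, cuts)
        for c in range(min_size, len(tau) - min_size):
            if any(abs(c - c0) < min_size for c0 in cuts):
                continue
            b = bic(tau, sorted(cuts + [c]))
            if b < best_bic:
                best, best_bic = c, b
        if best is None:
            break
        cuts = sorted(cuts + [best])
    return cuts

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    # toy returns: a calm regime followed by a turbulent one
    r = np.concatenate([rng.standard_normal(6000) * 0.01,
                        rng.standard_normal(2000) * 0.02])
    tau = encode(r)
    print("estimated change points (in event index):", segment(tau))
```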