
Evidence for criticality in financial data

Added by: Prof. Guiomar Ruiz
Publication date: 2017
Field: Financial
Research language: English





We provide evidence that the cumulative distributions of absolute normalized returns for the $100$ American companies with the highest market capitalization uncover a critical behavior across different time scales $\Delta t$. Such cumulative distributions, in accordance with a variety of complex --and financial-- systems, can be modeled by the cumulative distribution functions of $q$-Gaussians, the distribution that, in the context of nonextensive statistical mechanics, maximizes a non-Boltzmannian entropy. These $q$-Gaussians are characterized by two parameters, namely $(q,\beta)$, which are uniquely determined by $\Delta t$. From these dependencies, we find a monotonic relationship between $q$ and $\beta$, which can be seen as evidence of criticality. We numerically determine the various exponents that characterize this criticality.
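As an illustration of the kind of fit the abstract describes, the sketch below performs a maximum-likelihood fit of a $q$-Gaussian to absolute normalized returns. It is a minimal sketch under assumed conditions ($1 < q < 3$, synthetic Student-$t$ data standing in for real returns at a fixed $\Delta t$), not the authors' exact estimation procedure.

```python
# Minimal sketch (assumptions: 1 < q < 3, synthetic Student-t data standing in
# for real returns): maximum-likelihood fit of a q-Gaussian to absolute
# normalized returns at one time scale Delta t.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def neg_log_likelihood(params, abs_x):
    q, beta = params
    if not (1.0 < q < 3.0) or beta <= 0.0:
        return np.inf
    # log of the q-Gaussian normalization constant C_q for 1 < q < 3
    log_cq = (0.5 * np.log(np.pi) - 0.5 * np.log(q - 1.0)
              + gammaln((3.0 - q) / (2.0 * (q - 1.0))) - gammaln(1.0 / (q - 1.0)))
    # folded density of |x|: 2 * sqrt(beta) / C_q * [1 + (q-1) beta x^2]^(-1/(q-1))
    log_pdf = (np.log(2.0) + 0.5 * np.log(beta) - log_cq
               - np.log1p((q - 1.0) * beta * abs_x ** 2) / (q - 1.0))
    return -np.sum(log_pdf)

rng = np.random.default_rng(0)
returns = rng.standard_t(df=3, size=10_000)       # stand-in for normalized returns
abs_returns = np.abs(returns / returns.std())

res = minimize(neg_log_likelihood, x0=[1.4, 1.0], args=(abs_returns,),
               method="Nelder-Mead")
q_hat, beta_hat = res.x
print(f"fitted q = {q_hat:.3f}, beta = {beta_hat:.3f}")
```

A Student-$t$ distribution with $\nu$ degrees of freedom is itself a $q$-Gaussian with $q=(\nu+3)/(\nu+1)$, so the synthetic data above should recover $q \approx 1.5$.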



Related research

We propose a novel approach that allows us to calculate a Hilbert-transform-based complex correlation for unevenly spaced data. This method is especially suitable for high-frequency trading data, which are of particular interest in finance. Its most important feature is the ability to take into account lead-lag relations on different scales, without knowing them in advance. We also present results obtained by applying this approach to Tokyo Stock Exchange intraday quotations. We show that individual sectors and subsectors tend to form important market components which may follow each other with small but significant delays. These components may be recognized by analysing the eigenvectors of the complex correlation matrix for Nikkei 225 stocks. Interestingly, sectorial components are also found in eigenvectors corresponding to the bulk eigenvalues, traditionally treated as noise.
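For intuition about the Hilbert-transform construction, here is a minimal sketch for the simpler case of evenly spaced returns (the paper's contribution is handling uneven spacing, which this sketch does not reproduce); the price array and stock count are hypothetical.

```python
# Minimal sketch for evenly spaced data (the paper's method additionally
# handles uneven spacing): analytic signals via the Hilbert transform, a
# Hermitian complex correlation matrix, and its eigen-decomposition.
# `prices` is a hypothetical (T, N) array of synchronous prices for N stocks.
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(1)
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.normal(size=(1000, 5)), axis=0))

returns = np.diff(np.log(prices), axis=0)
analytic = hilbert(returns, axis=0)                 # x(t) + i * H[x](t)
z = (analytic - analytic.mean(axis=0)) / analytic.std(axis=0)

# Complex correlation matrix: C_ij = < conj(z_i(t)) * z_j(t) >_t  (Hermitian)
C = z.conj().T @ z / z.shape[0]

eigvals, eigvecs = np.linalg.eigh(C)                # real eigenvalues, complex eigenvectors
print("largest eigenvalue:", eigvals[-1].round(3))
# Phases of the leading eigenvector encode lead-lag relations between stocks.
print("leading eigenvector phases:", np.angle(eigvecs[:, -1]).round(3))
```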
Data augmentation methods in combination with deep neural networks have been used extensively in computer vision on classification tasks, achieving great success; however, their use in time series classification is still at an early stage. This is even more so in the field of financial prediction, where data tend to be small, noisy and non-stationary. In this paper we evaluate several augmentation methods applied to stock datasets using two state-of-the-art deep learning models. The results show that several augmentation methods significantly improve financial performance when used in combination with a trading strategy. For a relatively small dataset ($\approx 30$K samples), augmentation methods achieve up to $400\%$ improvement in risk-adjusted return performance; for a larger stock dataset ($\approx 300$K samples), results show up to $40\%$ improvement.
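The following sketch shows two generic time-series augmentations of the kind evaluated in such studies, jittering and magnitude scaling; the specific functions and parameters are illustrative assumptions, not the paper's exact augmentation set.

```python
# Minimal sketch of two generic time-series augmentations (jittering and
# magnitude scaling); these are common choices and not necessarily the exact
# set evaluated in the paper.  `x` is a hypothetical (length, channels) window
# of normalized price features used as one training sample.
import numpy as np

def jitter(x, sigma=0.01, rng=None):
    """Add small i.i.d. Gaussian noise to every time step."""
    rng = np.random.default_rng() if rng is None else rng
    return x + rng.normal(0.0, sigma, size=x.shape)

def magnitude_scale(x, sigma=0.1, rng=None):
    """Multiply each channel by a random factor drawn around 1."""
    rng = np.random.default_rng() if rng is None else rng
    factors = rng.normal(1.0, sigma, size=(1, x.shape[1]))
    return x * factors

rng = np.random.default_rng(2)
x = rng.normal(size=(60, 4))              # stand-in for one 60-step, 4-feature sample
augmented = [jitter(x, rng=rng), magnitude_scale(x, rng=rng)]
print([a.shape for a in augmented])
```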
We test three common information criteria (IC) for selecting the order of a Hawkes process with an intensity kernel that can be expressed as a mixture of exponential terms. These processes find application in high-frequency financial data modelling. The information criteria are Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the Hannan-Quinn criterion (HQ). Since we work with simulated data, we are able to measure the performance of model selection by the success rate of the IC in selecting the model that was used to generate the data. In particular, we are interested in the relation between correct model selection and underlying sample size. The analysis includes realistic sample sizes and parameter sets from recent literature in which parameters were estimated from empirical financial intra-day data. We compare our results to theoretical predictions and similar empirical findings on the asymptotic distribution of model selection for consistent and inconsistent IC.
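The three criteria have closed forms in terms of the maximized log-likelihood, the number of parameters and the sample size; the sketch below computes them for a hypothetical comparison between a one- and a two-exponential Hawkes kernel (the log-likelihood values are placeholders, not fitted results).

```python
# Minimal sketch of the three information criteria compared in the paper,
# computed from a model's maximized log-likelihood; the log-likelihood values
# and parameter counts below are placeholders, not fitted Hawkes results.
import numpy as np

def information_criteria(log_lik, n_params, n_obs):
    aic = -2.0 * log_lik + 2.0 * n_params
    bic = -2.0 * log_lik + n_params * np.log(n_obs)
    hq = -2.0 * log_lik + 2.0 * n_params * np.log(np.log(n_obs))
    return {"AIC": aic, "BIC": bic, "HQ": hq}

# Example: compare a 1-exponential vs. a 2-exponential Hawkes kernel.
# A kernel with P exponential terms has 1 + 2P parameters
# (baseline mu, plus one (alpha_i, beta_i) pair per term).
print(information_criteria(log_lik=-10450.0, n_params=3, n_obs=5000))
print(information_criteria(log_lik=-10441.0, n_params=5, n_obs=5000))
```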
Data normalization is one of the most important preprocessing steps when building a machine learning model, especially when the model of interest is a deep neural network. This is because a deep neural network optimized with stochastic gradient descent is sensitive to the input variable range and prone to numerical issues. Unlike other types of signals, financial time-series often exhibit unique characteristics such as high volatility, non-stationarity and multi-modality that make them challenging to work with, often requiring expert domain knowledge to devise a suitable processing pipeline. In this paper, we propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series. The proposed normalization scheme, which takes into account the bimodal characteristic of financial multivariate time-series, requires no expert knowledge to preprocess a financial time-series, since this step is formulated as part of the end-to-end optimization process. Our experiments, conducted with state-of-the-art neural networks and high-frequency data from two large-scale limit order books from the Nordic and US markets, show significant improvements over other normalization techniques in forecasting future stock price dynamics.
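As a rough illustration of normalization learned as part of end-to-end optimization, the sketch below implements a per-sample standardization followed by a learnable affine correction in PyTorch; it is an assumed simplification, not the paper's bimodality-aware scheme.

```python
# Minimal sketch, assuming PyTorch: per-sample standardization over time
# followed by a learnable affine correction trained end-to-end with the
# network.  This is a simplification, not the paper's bimodality-aware scheme.
# Input shape: (batch, time, features).
import torch
import torch.nn as nn

class AdaptiveNormalization(nn.Module):
    def __init__(self, n_features, eps=1e-6):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(n_features))   # learned, updated by SGD
        self.shift = nn.Parameter(torch.zeros(n_features))
        self.eps = eps

    def forward(self, x):
        mu = x.mean(dim=1, keepdim=True)          # per-sample, per-feature mean over time
        sd = x.std(dim=1, keepdim=True)
        z = (x - mu) / (sd + self.eps)
        return z * self.scale + self.shift        # learned rescaling shared across samples

x = torch.randn(32, 100, 40)      # e.g. 100 limit-order-book snapshots with 40 features
norm = AdaptiveNormalization(n_features=40)
print(norm(x).shape)              # torch.Size([32, 100, 40])
```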
Many fits of Hawkes processes to financial data look rather good, but most of them are not statistically significant. This raises the question of what part of market dynamics this model is able to account for exactly. We document the accuracy of such processes as one varies the time interval of calibration and compare the performance of various types of kernels made up of sums of exponentials. Because of their around-the-clock opening times, FX markets are ideally suited to our aim, as they allow us to avoid the complications of the long daily overnight closures of equity markets. One can achieve statistical significance according to three simultaneous tests, provided that one uses kernels with two exponentials when fitting an hour at a time, and two or three exponentials for full days, while longer periods could not be fitted to statistical satisfaction because of the non-stationarity of the endogenous process. Fitted timescales are relatively short, and the endogeneity factor is high but sub-critical, at about 0.8.
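For reference, the sketch below writes down a Hawkes intensity with a two-exponential kernel and its endogeneity (branching) factor, the integral of the kernel; the parameter values are illustrative and merely chosen to land near the sub-critical 0.8 mentioned above, not the paper's FX estimates.

```python
# Minimal sketch of a Hawkes intensity with a two-exponential kernel and its
# endogeneity (branching) factor; parameter values are illustrative, chosen to
# give a sub-critical factor near 0.8, and are not the paper's FX estimates.
import numpy as np

mu = 0.1                              # baseline intensity (events per unit time)
alphas = np.array([1.2, 0.02])        # excitation amplitudes
betas = np.array([2.0, 0.1])          # decay rates of the two exponential terms

def intensity(t, event_times):
    """lambda(t) = mu + sum over past events k of sum_i alpha_i * exp(-beta_i * (t - t_k))."""
    past = event_times[event_times < t]
    lags = t - past[:, None]                          # shape (n_past, 1)
    return mu + np.sum(alphas * np.exp(-betas * lags))

# Endogeneity factor = integral of the kernel = sum_i alpha_i / beta_i;
# the process is stationary (sub-critical) only if this stays below 1.
print("endogeneity factor:", np.sum(alphas / betas))  # 0.6 + 0.2 = 0.8

events = np.array([0.2, 0.5, 1.1, 1.4])
print("intensity at t = 2.0:", round(intensity(2.0, events), 4))
```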
