
The Ups and Downs of Modeling Financial Time Series with Wiener Process Mixtures

Added by Damien Challet
Publication date: 2009
Field: Physics
Language: English





Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a way to build a model describing the time evolution of a financial index. We first make it fully explicit by using Student distributions instead of power-law truncated Lévy distributions; we also show that the analytic tractability of the model extends to the larger class of symmetric generalized hyperbolic distributions, and we provide a full computation of their multivariate characteristic functions. More generally, the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time-reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.
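The mixture representation is concrete enough to sketch in code: a Student-t marginal arises from a Wiener process whose variance is itself random, drawn from an inverse-gamma distribution. The following is a minimal illustration of that variance-mixture construction, not the authors' full model (which also involves the inhomogeneous time scaling); the function name and parameters are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def student_t_wiener_mixture(nu, n_paths, n_steps, dt=1.0):
    """Paths built as a variance mixture of Wiener processes:
    conditional on V ~ InverseGamma(nu/2, nu/2), each path is a
    Brownian motion with variance V, so its one-step increments
    are marginally Student-t distributed with nu degrees of freedom."""
    # V ~ InvGamma(nu/2, nu/2): one random variance per path (the mixing variable)
    v = (nu / 2.0) / rng.gamma(shape=nu / 2.0, scale=1.0, size=n_paths)
    # Gaussian increments scaled by the path-wise random volatility
    dw = rng.normal(size=(n_paths, n_steps)) * np.sqrt(v[:, None] * dt)
    return np.cumsum(dw, axis=1)

paths = student_t_wiener_mixture(nu=4.0, n_paths=10000, n_steps=50)
# Pooled one-step increments have the fat tails of a t distribution,
# unlike the Gaussian increments of any single Wiener process.
```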



Related research

G. M. Viswanathan (2006)
A challenging problem in physics concerns the possibility of forecasting rare but extreme phenomena such as large earthquakes, financial market crashes, and material rupture. A promising line of research involves the early detection of precursory log-periodic oscillations to help forecast extreme events in collective phenomena where discrete scale invariance plays an important role. Here I investigate two distinct approaches towards the general problem of how to detect log-periodic oscillations in arbitrary time series without prior knowledge of the location of the movable singularity. I first show that the problem has a definite solution in Fourier space; however, the technique involved requires an unrealistically large signal-to-noise ratio. I then show that the quadrature signal obtained via analytic continuation onto the imaginary axis, using the Hilbert transform, necessarily retains the log-periodicities found in the original signal. This finding allows the development of a new method of detecting log-periodic oscillations that relies on calculation of the instantaneous phase of the analytic signal. I illustrate the method by applying it to the well-documented stock market crash of 1987. Finally, I discuss the relevance of these findings for parametric rather than nonparametric estimation of critical times.
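The phase-based detection step can be sketched with an FFT-based Hilbert transform, a standard construction: suppress negative frequencies to obtain the analytic signal, then read off its unwrapped phase. The function and the toy log-periodic signal below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def instantaneous_phase(x):
    """Unwrapped phase of the analytic signal of x, via the FFT-based
    Hilbert transform (zero negative frequencies, double positive ones)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                 # remove DC so the phase is well defined
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(spec * h)
    return np.unwrap(np.angle(analytic))

# Toy log-periodic signal cos(omega * log(tc - t)) with critical time tc:
# away from the edges, its instantaneous phase is close to linear in log(tc - t),
# which is the signature the method looks for.
t = np.linspace(0.0, 9.0, 2000)
tc, omega = 10.0, 8.0
phase = instantaneous_phase(np.cos(omega * np.log(tc - t)))
```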
The process of collecting and organizing sets of observations represents a common theme throughout the history of science. However, despite the ubiquity of scientists measuring, recording, and analyzing the dynamics of different processes, an extensive organization of scientific time-series data and analysis methods has never been performed. Addressing this, annotated collections of over 35 000 real-world and model-generated time series and over 9000 time-series analysis algorithms are analyzed in this work. We introduce reduced representations of both time series, in terms of their properties measured by diverse scientific methods, and of time-series analysis methods, in terms of their behaviour on empirical time series, and use them to organize these interdisciplinary resources. This new approach to comparing across diverse scientific data and methods allows us to organize time-series datasets automatically according to their properties, retrieve alternatives to particular analysis methods developed in other scientific disciplines, and automate the selection of useful methods for time-series classification and regression tasks. The broad scientific utility of these tools is demonstrated on datasets of electroencephalograms, self-affine time series, heart beat intervals, speech signals, and others, in each case contributing novel analysis techniques to the existing literature. Highly comparative techniques that compare across an interdisciplinary literature can thus be used to guide more focused research in time-series analysis for applications across the scientific disciplines.
Forecasting financial time series is a challenging task, since most real-world data exhibit nonstationarity and nonlinear dependencies. In addition, different data modalities often embed different nonlinear relationships which are difficult to capture by human-designed models. To tackle the supervised learning task in financial time-series prediction, we propose the application of a recently formulated algorithm that adaptively learns a mapping function, realized by a heterogeneous neural architecture composed of Generalized Operational Perceptrons, given a set of labeled data. With a modified objective function, the proposed algorithm can accommodate the frequently observed imbalanced data distribution problem. Experiments on a large-scale Limit Order Book dataset demonstrate that the proposed algorithm outperforms related algorithms, including tensor-based methods which have access to a broader set of input information.
Many time series produced by complex systems are empirically found to follow power-law distributions with different exponents $\alpha$. By permuting the independently drawn samples from a power-law distribution, we present non-trivial bounds on the memory strength (1st-order autocorrelation) as a function of $\alpha$, which are markedly different from the ordinary $\pm 1$ bounds for Gaussian or uniform distributions. When $1 < \alpha \leq 3$, as $\alpha$ increases, the upper bound increases from 0 to +1 while the lower bound remains 0; when $\alpha > 3$, the upper bound remains +1 while the lower bound descends below 0. Theoretical bounds agree well with numerical simulations. Based on the posts on Twitter, ratings of MovieLens, calling records of the mobile operator Orange, and browsing behavior of Taobao, we find that empirical power-law distributed data produced by human activities obey such constraints. The present findings explain some observed constraints in bursty time series and scale-free networks, and challenge the validity of measures like autocorrelation and assortativity coefficient in heterogeneous systems.
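The effect described above can be checked numerically. Sorting a sequence yields a strongly persistent arrangement (near the maximal lag-1 autocorrelation attainable by permutation), and comparing sorted Gaussian samples with sorted power-law samples illustrates the constraint. This is a rough numerical sketch under our own parameter choices, not the paper's exact bound computation.

```python
import numpy as np

rng = np.random.default_rng(42)

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation (memory strength) of a sequence."""
    x = np.asarray(x, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

n = 20000
# Classical Pareto samples: density tail ~ x^(-alpha) with alpha = a + 1 = 1.8
heavy = 1.0 + rng.pareto(a=0.8, size=n)
gauss = rng.normal(size=n)

# A random permutation destroys memory (autocorrelation near 0), while
# sorting produces a highly persistent arrangement. For power-law data
# with 1 < alpha <= 3 even the sorted arrangement stays visibly below
# the near +1 value reached by sorted Gaussian data.
shuffled_corr = lag1_autocorr(rng.permutation(gauss))
sorted_gauss = lag1_autocorr(np.sort(gauss))
sorted_heavy = lag1_autocorr(np.sort(heavy))
```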
Jan W. Kantelhardt (2008)
Data series generated by complex systems exhibit fluctuations on many time scales and/or broad distributions of the values. In both equilibrium and non-equilibrium situations, the natural fluctuations are often found to follow a scaling relation over several orders of magnitude, allowing for a characterisation of the data and the generating complex system by fractal (or multifractal) scaling exponents. In addition, fractal and multifractal approaches can be used for modelling time series and deriving predictions regarding extreme events. This review article describes and exemplifies several methods originating from Statistical Physics and Applied Mathematics, which have been used for fractal and multifractal time series analysis.
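One workhorse method from this family is detrended fluctuation analysis (DFA): integrate the series, detrend it in windows of varying size $s$, and read the scaling exponent from how the RMS fluctuation $F(s)$ grows with $s$. Below is a compact, illustrative DFA1 implementation of our own, not the review's reference code.

```python
import numpy as np

def dfa_exponent(x, scales):
    """DFA1: scaling exponent of the RMS fluctuation F(s) ~ s^h of
    the integrated profile of x, with linear detrending per window."""
    profile = np.cumsum(x - np.mean(x))
    fluctuations = []
    for s in scales:
        n_seg = len(profile) // s
        segments = profile[:n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        f2 = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, 1)              # linear trend (DFA1)
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        fluctuations.append(np.sqrt(np.mean(f2)))
    # slope of log F(s) versus log s is the fluctuation exponent h
    slope, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return slope

rng = np.random.default_rng(1)
h = dfa_exponent(rng.normal(size=20000), scales=[16, 32, 64, 128, 256])
# For uncorrelated (white) noise the DFA exponent is close to 0.5;
# long-range correlated or anti-correlated series deviate from it.
```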
