441 - Damien Challet, 2015
The total duration of drawdowns is shown to provide a moment-free, unbiased, efficient and robust estimator of Sharpe ratios for both Gaussian and heavy-tailed price returns. We then use this quantity to infer an analytic expression for the bias of moment-based Sharpe ratio estimators as a function of the return distribution tail exponent. The heterogeneity of tail exponents at any given time among assets implies that our new method yields significantly different asset rankings than those of moment-based methods, especially in periods of large volatility. This is fully confirmed by using 20 years of historical data on 3449 liquid US equities.
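The abstract's key quantity, the total time a strategy spends in drawdown, can be computed directly from a return series. The sketch below is a minimal illustration, not the paper's exact estimator: `drawdown_duration_fraction` is a hypothetical helper name, and the comparison with a classical moment-based Sharpe ratio is only meant to show the two ingredients side by side.

```python
import numpy as np

def drawdown_duration_fraction(returns):
    """Fraction of time the cumulative return series spends in drawdown,
    i.e. strictly below its running maximum (illustrative helper)."""
    cum = np.cumsum(returns)
    running_max = np.maximum.accumulate(cum)
    return np.mean(cum < running_max)

def sharpe_moment_based(returns):
    """Classical moment-based Sharpe ratio (per period, no annualization)."""
    return returns.mean() / returns.std(ddof=1)

# Synthetic Gaussian returns: higher drift means a higher Sharpe ratio
# and, correspondingly, less time spent in drawdown.
rng = np.random.default_rng(0)
r = rng.normal(0.01, 0.1, size=5000)
moment_sharpe = sharpe_moment_based(r)
dd_fraction = drawdown_duration_fraction(r)
```

The drawdown-duration statistic only asks whether the equity curve is at a new high or not, which is why it needs no moments of the return distribution and remains well behaved under heavy tails.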
137 - Damien Challet, 2015
A new family of nonparametric statistics, the r-statistics, is introduced. It consists of counting the number of records of the cumulative sum of the sample. The single-sample r-statistic is almost as powerful as Student's t-statistic for Gaussian and uniformly distributed variables, and more powerful than the sign and Wilcoxon signed-rank statistics as long as the data are not too heavy-tailed. Three two-sample parametric r-statistics are proposed, one with a higher specificity but a smaller sensitivity than the Mann-Whitney U-test, and another with a higher sensitivity but a smaller specificity. A nonparametric two-sample r-statistic is introduced, whose power is very close to that of Welch's statistic for Gaussian or uniformly distributed variables.
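The core idea, counting the records of the cumulative sum, takes a few lines to implement. This is a minimal reading of the definition stated in the abstract, not the paper's full machinery (no two-sample variants, no null distribution):

```python
import numpy as np

def r_statistic(x):
    """Single-sample r-statistic: number of upper records of the
    cumulative sum of the sample. A record occurs whenever the
    cumulative sum reaches a new running maximum."""
    cum = np.cumsum(x)
    running_max = np.maximum.accumulate(cum)
    # For continuous data ties have probability zero, so counting
    # equality with the running maximum counts exactly the records.
    return int(np.sum(cum == running_max))
```

A sample with a positive location shift drifts upward, so its cumulative sum sets many records; under the zero-median null the record count stays small, which is what makes the count usable as a test statistic.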
Many fits of Hawkes processes to financial data look rather good, but most of them are not statistically significant. This raises the question of what part of market dynamics this model is able to account for exactly. We document the accuracy of such processes as one varies the time interval of calibration and compare the performance of various types of kernels made up of sums of exponentials. Because of their around-the-clock opening times, FX markets are ideally suited to our aim, as they allow us to avoid the complications of the long daily overnight closures of equity markets. One can achieve statistical significance according to three simultaneous tests provided that one uses kernels with two exponentials for fitting an hour at a time, and two or three exponentials for full days, while longer periods could not be fitted within statistical satisfaction because of the non-stationarity of the endogenous process. Fitted timescales are relatively short and the endogeneity factor is high but sub-critical, at about 0.8.
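The two quantities the abstract refers to, a kernel built as a sum of exponentials and the resulting endogeneity (branching) factor, can be written down explicitly. The sketch below assumes the standard univariate Hawkes parameterization with kernel φ(t) = Σ_j α_j exp(−β_j t); the parameter names are illustrative, not taken from the paper:

```python
import numpy as np

def hawkes_intensity(t, events, mu, alphas, betas):
    """Intensity lambda(t) = mu + sum over past events of
    phi(t - t_i), with phi(t) = sum_j alpha_j * exp(-beta_j * t)."""
    events = np.asarray(events, dtype=float)
    lag = t - events[events < t]           # only past events excite
    excitation = sum(a * np.exp(-b * lag).sum()
                     for a, b in zip(alphas, betas))
    return mu + excitation

def endogeneity_factor(alphas, betas):
    """Branching ratio n = integral of phi = sum_j alpha_j / beta_j;
    the process is sub-critical (stationary) when n < 1."""
    return sum(a / b for a, b in zip(alphas, betas))
```

With two exponentials and α_j/β_j summing to about 0.8, one is in the high-but-sub-critical regime the abstract describes: roughly 80% of events are triggered by past events rather than by the baseline rate μ.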
Using non-linear machine learning methods and a proper backtest procedure, we critically examine the claim that Google Trends can predict future price returns. We first review the many potential biases that may positively influence backtests based on this kind of data, the choice of keywords being by far the greatest culprit. We then argue that the real question is whether such data contain more predictability than price returns themselves: our backtest yields a performance of about 17bps per week which only weakly depends on the kind of data on which predictors are based, i.e. either past price returns or Google Trends data, or both.
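A "proper backtest procedure" in this context means strictly walk-forward evaluation: at each step the predictor is fit only on data available at that time. The skeleton below is a generic sketch of that discipline, not the paper's pipeline; `predict` stands for any user-supplied fit-and-forecast function (e.g. one trained on past returns, on Google Trends series, or both):

```python
import numpy as np

def walk_forward_backtest(returns, features, train_len, predict):
    """Walk-forward backtest skeleton: at each step t, fit only on the
    trailing window [t - train_len, t) and trade period t, so no
    future information leaks into the signal."""
    pnl = []
    for t in range(train_len, len(returns)):
        signal = predict(features[t - train_len:t],
                         returns[t - train_len:t])
        # Take a unit long/short position in the sign of the forecast.
        pnl.append(np.sign(signal) * returns[t])
    return np.array(pnl)
```

Keyword-selection bias, the culprit the abstract singles out, lives outside this loop: if the keywords themselves were chosen with hindsight, even a leak-free loop like this one will overstate performance.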
Demand outstrips available resources in most situations, which gives rise to competition, interaction and learning. In this article, we review a broad spectrum of multi-agent models of competition (El Farol Bar problem, Minority Game, Kolkata Paise Restaurant problem, Stable marriage problem, Parking space problem and others) and the methods used to understand them analytically. We emphasize the power of concepts and tools from statistical mechanics to understand and explain fully collective phenomena such as phase transitions and long memory, and the mapping between agent heterogeneity and physical disorder. As these methods can be applied to any large-scale model of competitive resource allocation made up of heterogeneous adaptive agents with non-linear interactions, they provide a prospective unifying paradigm for many scientific disciplines.
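Of the models listed, the Minority Game is compact enough to simulate in a few lines. The sketch below follows the textbook setup (agents with fixed random strategies over a binary history of length m, playing their best-scoring strategy, with the minority side winning); details such as payoff conventions vary across the literature, so treat this as one illustrative variant:

```python
import numpy as np

def minority_game(n_agents=301, n_steps=2000, memory=3,
                  n_strategies=2, seed=0):
    """Minimal Minority Game. Each agent holds fixed random strategies
    mapping each of the 2**memory histories to an action in {-1, +1}
    and plays its currently best-scoring one; the minority side wins."""
    rng = np.random.default_rng(seed)
    n_hist = 2 ** memory
    strategies = rng.choice([-1, 1], size=(n_agents, n_strategies, n_hist))
    scores = np.zeros((n_agents, n_strategies))
    history = int(rng.integers(n_hist))
    attendance = []
    for _ in range(n_steps):
        best = scores.argmax(axis=1)                      # ties -> first
        actions = strategies[np.arange(n_agents), best, history]
        A = actions.sum()                                  # aggregate action
        attendance.append(int(A))
        minority = -np.sign(A)                             # odd N: A != 0
        # Virtual payoff: every strategy that would have chosen the
        # minority side gains a point, whether played or not.
        scores += (strategies[:, :, history] == minority)
        history = ((history << 1) | (1 if minority == 1 else 0)) % n_hist
    return np.array(attendance)
```

The fluctuations of the attendance A around zero are the standard order parameter: their dependence on the ratio 2^memory / n_agents is where the phase transition mentioned in the abstract shows up.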
Tuning one's shower in some hotels may turn into a challenging coordination game with imperfect information. The temperature sensitivity increases with the number of agents, making the problem possibly unlearnable. Because there is in practice a finite number of possible tap positions, identical agents are unlikely to reach even approximately their favorite water temperature. We show that a population of agents with homogeneous strategies is evolutionarily unstable, which gives insights into the emergence of heterogeneity, the latter being tempting but risky.
76 - David Bree, Damien Challet, 2010
We show that log-periodic power-law (LPPL) functions are intrinsically very hard to fit to time series. This comes from their sloppiness, the squared residuals depending very much on some combinations of parameters and very little on other ones. The time of singularity that is supposed to give an estimate of the day of the crash belongs to the latter category. We discuss in detail why and how the fitting procedure must take into account the sloppy nature of this kind of model. We then test the reliability of LPPLs on synthetic AR(1) data replicating the Hang Seng 1987 crash and show that even this case is borderline regarding predictability of divergence time. We finally argue that current methods used to estimate a probabilistic time window for the divergence time are likely to be over-optimistic.
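The sloppiness argument is easiest to see with the LPPL function written out. The sketch below uses one common parameterization, ln p(t) ≈ A + B(t_c − t)^m + C(t_c − t)^m cos(ω ln(t_c − t) − φ) for t < t_c; the paper may use a slightly different form, and the parameter names here are the conventional ones, not necessarily the authors':

```python
import numpy as np

def lppl(t, tc, m, omega, A, B, C, phi):
    """Log-periodic power law for the log-price, valid for t < tc.
    tc is the critical (crash) time, m the power-law exponent,
    omega the log-periodic angular frequency."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(omega * np.log(dt) - phi)
```

Sloppiness means the squared residuals of a fit depend strongly on some parameter combinations (essentially the linear ones A, B, C) and very weakly on others, t_c among them, which is exactly why the estimated crash date is the least constrained output of the fit.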
We show that a simple and intuitive three-parameter equation fits remarkably well the evolution of the gross domestic product (GDP) in current and constant dollars of many countries during times of recession and recovery. We then argue that this equation is the response function of the economy to isolated shocks, hence that it can be used to detect large and small shocks, including those which do not lead to a recession; we also discuss its predictive power. Finally, a two-sector toy model of recession and recovery illustrates how the severity and length of a recession depend on the dynamics of the transfer rate between the growing and failing parts of the economy.
Starting from inhomogeneous time scaling and linear decorrelation between successive price returns, Baldovin and Stella recently proposed a way to build a model describing the time evolution of a financial index. We first make it fully explicit by using Student distributions instead of power-law-truncated Lévy distributions; we also show that the analytic tractability of the model extends to the larger class of symmetric generalized hyperbolic distributions and provide a full computation of their multivariate characteristic functions; more generally, the stochastic processes arising in this framework are representable as mixtures of Wiener processes. The Baldovin and Stella model, while mimicking well volatility relaxation phenomena such as the Omori law, fails to reproduce other stylized facts such as the leverage effect or some time reversal asymmetries. We discuss how to modify the dynamics of this process in order to reproduce real data more accurately.
We report activity data analysis on several open source software projects, focusing on the time between modifications and on the number of files modified at once. Both have fat-tailed distributions, long-term memory, and display systematic non-trivial cross-correlations, suggesting that quiet periods are followed by cascading modifications. In addition, the maturity of a software project can be measured from the exponent of the distribution of inter-modification times. Finally, the dynamics of a single file displays ageing, the average rate of modifications decaying as a function of time following a power law.
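Measuring the tail exponent of the inter-modification time distribution, the abstract's proposed maturity metric, is typically done with an estimator such as Hill's. The sketch below shows the standard Hill estimator as one plausible tool; the paper's exact fitting procedure is not specified in the abstract:

```python
import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the tail exponent alpha of a fat-tailed
    positive sample, using the k largest order statistics:
    1/alpha = mean of log(x_(i)/x_(k+1)) over the top k values."""
    x = np.sort(np.asarray(sample, dtype=float))[::-1]  # descending
    logs = np.log(x[:k]) - np.log(x[k])
    return k / logs.sum()
```

Applied to the inter-commit times of a repository, a smaller estimated exponent means heavier tails, i.e. longer quiet periods punctuating the activity; the choice of k (how deep into the tail to look) is the usual bias-variance trade-off of this estimator.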