
Dynamic modeling of mean-reverting spreads for statistical arbitrage

Publication date: 2009
Fields: Financial
Language: English





Statistical arbitrage strategies, such as pairs trading and its generalizations, rely on the construction of mean-reverting spreads enjoying a certain degree of predictability. Gaussian linear state-space processes have recently been proposed as a model for such spreads under the assumption that the observed process is a noisy realization of some hidden states. Real-time estimation of the unobserved spread process can reveal temporary market inefficiencies which can then be exploited to generate excess returns. Building on previous work, we embrace the state-space framework for modeling spread processes and extend this methodology along three different directions. First, we introduce time-dependency in the model parameters, which allows for quick adaptation to changes in the data generating process. Second, we provide an on-line estimation algorithm that can be constantly run in real-time. Being computationally fast, the algorithm is particularly suitable for building aggressive trading strategies based on high-frequency data and may be used as a monitoring device for mean-reversion. Finally, our framework naturally provides informative uncertainty measures of all the estimated parameters. Experimental results based on Monte Carlo simulations and historical equity data are discussed, including a co-integration relationship involving two exchange-traded funds.
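As a rough sketch of the real-time filtering machinery described above, the following Python snippet runs a standard Kalman filter on a noisy mean-reverting (AR(1)) spread and tracks both the filtered spread and its uncertainty. The state-space form, parameter values and function names are illustrative assumptions of this sketch rather than the authors' specification, which in addition lets the model parameters themselves vary over time.

import numpy as np

def kalman_filter_spread(y, phi, mu, q, r):
    """Online filtering of a hidden mean-reverting (AR(1)) spread.

    State:        x_t = mu + phi * (x_{t-1} - mu) + w_t,  w_t ~ N(0, q)
    Observation:  y_t = x_t + v_t,                        v_t ~ N(0, r)

    Returns the filtered means and variances of the hidden spread,
    which can be monitored in real time for temporary mispricings.
    """
    n = len(y)
    x_hat, p_hat = np.empty(n), np.empty(n)
    x, p = mu, q / (1.0 - phi ** 2)          # start from the stationary law
    for t in range(n):
        x_pred = mu + phi * (x - mu)         # predict
        p_pred = phi ** 2 * p + q
        k = p_pred / (p_pred + r)            # Kalman gain
        x = x_pred + k * (y[t] - x_pred)     # update with the new tick
        p = (1.0 - k) * p_pred
        x_hat[t], p_hat[t] = x, p
    return x_hat, p_hat

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phi, mu, q, r, n = 0.95, 0.0, 0.05, 0.25, 500     # illustrative parameters
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = mu + phi * (x[t - 1] - mu) + rng.normal(0.0, np.sqrt(q))
    y = x + rng.normal(0.0, np.sqrt(r), n)            # noisy observed spread
    x_hat, p_hat = kalman_filter_spread(y, phi, mu, q, r)
    z = (x_hat - mu) / np.sqrt(p_hat)                 # z-score as a monitoring signal
    print("last filtered spread: %.3f (z = %.2f)" % (x_hat[-1], z[-1]))

In the setting of the abstract, the parameters (phi, mu, q, r) would not be fixed as above but re-estimated on-line with associated uncertainty measures.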



Related research

The paper solves the problem of optimal portfolio choice when the parameters of the asset returns distribution, such as the mean vector and the covariance matrix, are unknown and have to be estimated from historical data of the asset returns. The new approach employs the Bayesian posterior predictive distribution, which is the distribution of the future realization of the asset returns given the observable sample. The parameters of the posterior predictive distribution are functions of the observed data values and, consequently, the solution of the optimization problem is expressed in terms of data only and does not depend on unknown quantities. In contrast, the optimization problem of the traditional approach is based on unknown quantities which are estimated in a second step, leading to a suboptimal solution. We also derive a very useful stochastic representation of the posterior predictive distribution whose application not only leads to the solution of the considered optimization problem, but also provides the posterior predictive distribution of the optimal portfolio return used to construct a prediction interval. A Bayesian efficient frontier, a set of optimal portfolios obtained by employing the posterior predictive distribution, is constructed as well. Theoretically and using real data we show that the Bayesian efficient frontier outperforms the sample efficient frontier, a common estimator of the set of optimal portfolios known to be over-optimistic.
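A minimal simulation-based sketch of the posterior predictive idea follows, assuming a diffuse normal-inverse-Wishart posterior and a simple mean-variance rule applied to the predictive moments; the prior, the sampler and the weighting rule are illustrative choices of this sketch, not the closed-form results derived in the paper.

import numpy as np
from scipy.stats import invwishart

def posterior_predictive_draws(returns, n_draws=5000, seed=0):
    """Draw future returns from the posterior predictive distribution
    under a diffuse normal-inverse-Wishart setup (illustrative choice)."""
    rng = np.random.default_rng(seed)
    n, d = returns.shape
    xbar = returns.mean(axis=0)
    s = (returns - xbar).T @ (returns - xbar)          # scatter matrix
    draws = np.empty((n_draws, d))
    for i in range(n_draws):
        sigma = invwishart.rvs(df=n - 1, scale=s)       # posterior draw of Sigma
        mu = rng.multivariate_normal(xbar, sigma / n)   # posterior draw of mu | Sigma
        draws[i] = rng.multivariate_normal(mu, sigma)   # predictive return draw
    return draws

def predictive_mean_variance_weights(returns, risk_aversion=5.0):
    """Plug the predictive moments into a standard mean-variance rule."""
    draws = posterior_predictive_draws(returns)
    m = draws.mean(axis=0)
    c = np.cov(draws, rowvar=False)
    w = np.linalg.solve(risk_aversion * c, m)           # unconstrained MV weights
    return w / np.abs(w).sum()                          # normalize gross exposure

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    sample = rng.multivariate_normal([0.001, 0.0005, 0.0008],
                                     np.diag([0.01, 0.02, 0.015]) ** 2, size=250)
    print(predictive_mean_variance_weights(sample))

Because the weights depend on the observed sample only (through the predictive draws), no separate plug-in estimation step is needed, which is the contrast with the traditional approach drawn in the abstract.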
A number of recent emerging applications call for studying data streams, potentially infinite flows of information updated in real time. When multiple co-evolving data streams are observed, an important task is to determine how these streams depend on each other, accounting for dynamic dependence patterns without imposing any restrictive probabilistic law governing this dependence. In this paper we argue that flexible least squares (FLS), a penalized version of ordinary least squares that accommodates time-varying regression coefficients, can be deployed successfully in this context. Our motivating application is statistical arbitrage, an investment strategy that exploits patterns detected in financial data streams. We demonstrate that FLS is algebraically equivalent to the well-known Kalman filter equations, and take advantage of this equivalence to gain a better understanding of FLS and to suggest a more efficient algorithm. Promising experimental results obtained from an FLS-based algorithmic trading system for the S&P 500 Futures Index are reported.
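To make the FLS objective concrete, here is one direct way to estimate a time-varying hedge ratio between two price streams by solving the penalized least-squares problem as a single sparse linear system. The simulated data, the penalty value mu and the function names are assumptions of this sketch; the point of the paper is that the same coefficient path can be obtained more efficiently through the Kalman filter recursions.

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def flexible_least_squares(y, X, mu=100.0):
    """Time-varying-coefficient regression by flexible least squares (FLS).

    Minimizes  sum_t (y_t - x_t' b_t)^2  +  mu * sum_t ||b_{t+1} - b_t||^2
    over the whole coefficient path b_1, ..., b_T, written as one large
    but sparse least-squares system and solved in a single shot.
    """
    T, k = X.shape
    # block-diagonal "design" part: x_t x_t' in block t
    A = sparse.block_diag([np.outer(X[t], X[t]) for t in range(T)], format="csc")
    # first-difference penalty on the coefficient path
    D = sparse.kron(sparse.diags([-np.ones(T - 1), np.ones(T - 1)], [0, 1],
                                 shape=(T - 1, T)), sparse.eye(k), format="csc")
    rhs = (X * y[:, None]).reshape(T * k)
    b = spsolve(A + mu * (D.T @ D), rhs)
    return b.reshape(T, k)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T = 400
    p2 = np.cumsum(rng.normal(0, 1, T)) + 100            # one price stream
    beta = 1.0 + 0.5 * np.sin(np.linspace(0, 3, T))      # slowly drifting hedge ratio
    p1 = 5.0 + beta * p2 + rng.normal(0, 0.5, T)         # co-moving price stream
    X = np.column_stack([np.ones(T), p2])                # intercept + slope
    path = flexible_least_squares(p1, X, mu=1e4)
    print("estimated hedge ratio at start/end: %.2f / %.2f" % (path[0, 1], path[-1, 1]))

The penalty mu controls how quickly the coefficients are allowed to move; in the Kalman filter view it plays the role of an inverse state-noise variance.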
Victor Olkhov (2020)
This paper presents probability distributions for price and returns random processes over an averaging time interval Δ. These probabilities determine the properties of price and returns volatility. We define statistical moments of the price and returns random processes as functions of the costs and the volumes of market trades aggregated during the interval Δ. These sets of statistical moments determine characteristic functionals for the price and returns probability distributions. Volatilities are described by the first two statistical moments. The second statistical moments are described by functions of the second degree of the costs and the volumes of market trades aggregated during the interval Δ. We present price and returns volatilities as functions of the number of trades and of the second-degree costs and volumes of market trades aggregated during the interval Δ. These expressions support numerous results on correlations between returns volatility, the number of trades and the volume of market transactions. Forecasting the price and returns volatilities depends on modeling the second degree of the costs and the volumes of market trades aggregated during the interval Δ. Second-degree market trades impact the second degree of macro variables and expectations. Describing second-degree market trades, macro variables and expectations doubles the complexity of current macroeconomic and financial theory.
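One way to read the moment construction concretely (the notation here is chosen for illustration rather than taken from the paper): with individual trades during the interval Δ having prices p_i, volumes v_i and costs (trade values) c_i = p_i v_i, the volume-weighted price moments and the resulting volatility can be written as

C(n;\Delta) = \sum_{i \in \Delta} c_i^{\,n}, \qquad V(n;\Delta) = \sum_{i \in \Delta} v_i^{\,n},

p(n;\Delta) = \frac{\sum_{i \in \Delta} p_i^{\,n} v_i^{\,n}}{\sum_{i \in \Delta} v_i^{\,n}} = \frac{C(n;\Delta)}{V(n;\Delta)}, \qquad \sigma^2(\Delta) = p(2;\Delta) - p(1;\Delta)^2,

so the first moment is the familiar volume-weighted average price, and the volatility depends on the second-degree aggregates C(2;Δ) and V(2;Δ), matching the dependence described in the abstract.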
The 2008 mortgage crisis is an example of an extreme event. Extreme value theory tries to estimate such tail risks. Modern finance practitioners prefer Expected Shortfall-based risk metrics (which capture tail risk) over traditional approaches like volatility or even Value-at-Risk. This paper provides a quantum annealing algorithm in QUBO form for a dynamic asset allocation problem with an expected shortfall constraint. It was motivated by the need to refine the current quantum algorithms for Markowitz-type problems, which are academically interesting but not useful for practitioners. The algorithm is dynamic and the risk target emerges naturally from the market volatility. Moreover, it avoids complicated statistics such as the generalized Pareto distribution. It translates the problem into qubit form suitable for implementation on a quantum annealer such as D-Wave. Such QUBO algorithms are expected to be solved faster by quantum annealing systems than by any classical algorithm on a classical computer, though this has yet to be demonstrated at scale.
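As a toy illustration, the snippet below casts a small selection problem in QUBO form, with binary asset-inclusion variables, a quadratic risk term and a squared budget-constraint penalty. This is a generic construction under assumptions of this sketch, not the paper's expected-shortfall formulation; the resulting matrix is the object that would be handed to an annealer such as D-Wave.

import numpy as np

def build_portfolio_qubo(mu, sigma, k, risk_aversion=10.0, penalty=50.0):
    """QUBO for holding exactly k of d assets with a return/risk trade-off.

    Energy(x) = x' Q x + offset
              = -mu'x + risk_aversion * x' sigma x + penalty * (sum(x) - k)^2
    for binary x (x_i = 1 means asset i is held at equal weight).
    """
    mu = np.asarray(mu, dtype=float)
    d = len(mu)
    q = risk_aversion * np.array(sigma, dtype=float)
    # squared budget penalty, using x_i^2 = x_i for binary variables
    q += penalty * (np.ones((d, d)) - np.eye(d))          # cross terms x_i x_j, i != j
    q += np.diag(penalty * (1.0 - 2.0 * k) - mu)          # linear terms on the diagonal
    offset = penalty * k ** 2
    return q, offset

def energy(x, q, offset):
    return float(x @ q @ x) + offset

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mu = np.array([0.08, 0.06, 0.05, 0.07])
    a = rng.normal(size=(4, 4))
    sigma = 0.02 * (a @ a.T / 4 + np.eye(4))              # toy covariance matrix
    q, offset = build_portfolio_qubo(mu, sigma, k=2)
    # brute-force the 2^4 binary vectors; an annealer would do this sampling at scale
    best = min((np.array(b) for b in np.ndindex(*(2,) * len(mu))),
               key=lambda x: energy(x, q, offset))
    print("assets selected (1 = held):", best)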
In a general semimartingale financial model, we study the stability of the No Arbitrage of the First Kind (NA1) (or, equivalently, No Unbounded Profit with Bounded Risk) condition under initial and under progressive filtration enlargements. In both cases, we provide a simple and general condition which is sufficient to ensure this stability for any fixed semimartingale model. Furthermore, we give a characterisation of the NA1 stability for all semimartingale models.
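For reference, the standard formulation of the condition being studied (recalled here from the literature on NA1/NUPBR, not quoted from this paper): writing (H \cdot S) for the gains of an admissible strategy H over the horizon [0, T],

\mathcal{K}_1 = \{\, 1 + (H \cdot S)_T \;:\; H \text{ admissible},\ 1 + (H \cdot S) \ge 0 \,\}, \qquad \text{NUPBR:}\ \ \lim_{c \to \infty} \sup_{g \in \mathcal{K}_1} \mathbb{P}(g > c) = 0,

and NA1 is the absence of a nonnegative random variable \xi with \mathbb{P}(\xi > 0) > 0 that can be superreplicated from every initial capital x > 0. The paper asks when this property survives an initial or progressive enlargement of the underlying filtration.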
