The performance of investment managers is evaluated in comparison with benchmarks, such as financial indices. Because of the operational constraint that most professional databases do not track changes in the composition of benchmark portfolios, standard performance tests suffer from a look-ahead benchmark bias when they use the assets constituting the reference benchmark at the end of the testing period rather than at its beginning. Here, we report that the look-ahead benchmark bias can exhibit a surprisingly large amplitude for portfolios of common stocks (up to 8% per annum for the S&P 500 taken as the benchmark), while most studies have emphasized related survivorship biases in the performance of mutual and hedge funds, for which the biases can be expected to be even larger. We use the CRSP database from 1926 to 2006 and analyze the running top 500 US capitalizations to demonstrate that this bias can account for a gross overestimation of performance metrics such as the Sharpe ratio, as well as an underestimation of risk as measured, for instance, by peak-to-valley drawdowns. We demonstrate the presence of a significant bias in the estimation of the survival and look-ahead biases studied in the literature. A general methodology to test the properties of investment strategies is advanced, based on random strategies with similar investment constraints.
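To make the bias concrete, the following minimal sketch (in Python, assuming a pandas panel with hypothetical columns date, ticker, ret and mktcap, which are not CRSP field names) compares the Sharpe ratio of an equal-weight portfolio whose members are the top capitalizations known only at the end of the sample with that of a running top-N universe re-selected each period. The gap is a rough proxy for the look-ahead benchmark bias, not the paper's exact procedure:

    import numpy as np

    def sharpe(returns, periods_per_year=12):
        # Annualized Sharpe ratio of a series of periodic returns.
        return np.sqrt(periods_per_year) * returns.mean() / returns.std()

    def lookahead_vs_running(panel, n=500):
        # Look-ahead universe: the top-n capitalizations at the END of the sample.
        last = panel.loc[panel["date"] == panel["date"].max()]
        ahead_members = set(last.nlargest(n, "mktcap")["ticker"])
        r_ahead = (panel[panel["ticker"].isin(ahead_members)]
                   .groupby("date")["ret"].mean())
        # Running universe: re-select the top-n names at each date; in practice
        # one would rank on the PREVIOUS period's capitalization to avoid any
        # same-period peeking.
        r_running = (panel.groupby("date")
                     .apply(lambda df: df.nlargest(n, "mktcap")["ret"].mean()))
        return sharpe(r_ahead) - sharpe(r_running)   # positive gap = inflated performance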
The emergence of robust optimization has been driven primarily by the need to address the shortcomings of the Markowitz model. There has been a noteworthy debate over whether robust approaches should be regarded as superior to, or merely on par with, the Markowitz model in terms of portfolio performance. To address this skepticism, we perform an empirical analysis of three robust optimization models, namely those based on box, ellipsoidal and separable uncertainty sets. We conclude that robust approaches can be considered a viable alternative to the Markowitz model, not only on simulated data but also in a real market setup involving the Indian indices S&P BSE 30 and S&P BSE 100. Finally, we offer qualitative and quantitative justification for the practical usefulness of robust optimization approaches from the point of view of the number of stocks, the sample size and the types of data.
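As a point of reference for the box-uncertainty case, a minimal sketch is given below: with long-only weights, the worst case over a box of half-width delta around the estimated mean returns reduces to an ordinary Markowitz problem with penalized expected returns. The inputs mu_hat, Sigma, delta and risk_aversion are illustrative assumptions, not the paper's calibration:

    import numpy as np
    from scipy.optimize import minimize

    def robust_box_weights(mu_hat, Sigma, delta, risk_aversion=3.0):
        n = len(mu_hat)
        worst_mu = mu_hat - delta           # worst-case means inside the box (w >= 0)

        def neg_utility(w):
            return -(worst_mu @ w - 0.5 * risk_aversion * w @ Sigma @ w)

        cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0}]   # fully invested
        bounds = [(0.0, 1.0)] * n                                  # long-only
        w0 = np.full(n, 1.0 / n)
        return minimize(neg_utility, w0, bounds=bounds, constraints=cons).x

    # Example with 4 assets; the box half-width is set to roughly one
    # standard error of the mean for a hypothetical sample of 60 periods.
    mu_hat = np.array([0.08, 0.10, 0.12, 0.07])
    Sigma = np.diag([0.04, 0.05, 0.09, 0.03])
    delta = 0.5 * np.sqrt(np.diag(Sigma)) / np.sqrt(60)
    print(robust_box_weights(mu_hat, Sigma, delta))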
We find economically and statistically significant gains from using machine learning for portfolio allocation between the market index and the risk-free asset. Optimal portfolio rules for time-varying expected returns and volatility are implemented with two Random Forest models. One model forecasts the probability that the excess return is positive, using payout yields as predictors. The second is used to construct an optimized volatility estimate. Reward-risk timing with machine learning provides substantial improvements over buy-and-hold in utility, risk-adjusted returns, and maximum drawdowns. This paper presents a new theoretical basis and a unifying framework for machine learning applied to both return- and volatility-timing.
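A minimal sketch of such a two-model timing rule is given below, assuming hypothetical training arrays X_sign, y_sign (payout-yield features and the sign of the next excess return) and X_vol, y_var (volatility features and the next realized variance). The mapping w = (2p - 1) * mu_scale / (gamma * variance) is an illustrative reward-risk rule, not necessarily the paper's exact specification:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

    def timing_weight(X_sign, y_sign, X_vol, y_var, x_sign_now, x_vol_now,
                      mu_scale=0.05, gamma=5.0):
        # Forest 1: probability that next period's excess return is positive
        # (labels assumed to be coded as {0, 1}).
        clf = RandomForestClassifier(n_estimators=500, random_state=0)
        clf.fit(X_sign, y_sign)
        p_up = clf.predict_proba(x_sign_now.reshape(1, -1))[0, 1]

        # Forest 2: forecast of next period's realized variance.
        reg = RandomForestRegressor(n_estimators=500, random_state=0)
        reg.fit(X_vol, y_var)
        var_hat = max(reg.predict(x_vol_now.reshape(1, -1))[0], 1e-6)

        # Reward-risk weight on the market index, capped and long-only.
        w = (2.0 * p_up - 1.0) * mu_scale / (gamma * var_hat)
        return float(np.clip(w, 0.0, 1.5))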
We introduce diversified risk parity embedded with various reward-risk measures and more generic allocation rules for portfolio construction. We empirically test advanced reward-risk parity strategies and compare their performance with an equally weighted risk portfolio in various asset universes. The reward-risk parity strategies we test exhibit consistent outperformance, evidenced by higher average returns, Sharpe ratios, and Calmar ratios. The alternative allocations also exhibit lower downside risk in terms of Value-at-Risk, conditional Value-at-Risk, and maximum drawdown. In addition to the enhanced performance and reward-risk profile, transaction costs can be reduced by lowering turnover rates. A Carhart four-factor analysis also indicates that the diversified reward-risk parity allocations attain superior performance.
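One simple instance of such an allocation rule is sketched below: each asset's weight is made proportional to its mean return divided by its historical CVaR. The input `returns` (a T x N array of asset returns) is a hypothetical placeholder, and the paper's diversified reward-risk parity construction is more general than this illustration:

    import numpy as np

    def cvar(x, alpha=0.05):
        # Average loss beyond the alpha-quantile, expressed as a positive number.
        q = np.quantile(x, alpha)
        return -x[x <= q].mean()

    def reward_risk_parity_weights(returns, alpha=0.05):
        mu = returns.mean(axis=0)
        risk = np.array([cvar(returns[:, i], alpha) for i in range(returns.shape[1])])
        risk = np.maximum(risk, 1e-8)               # guard against non-positive risk
        score = np.clip(mu, 0.0, None) / risk       # drop assets with negative mean
        if score.sum() == 0.0:
            return np.full(returns.shape[1], 1.0 / returns.shape[1])
        return score / score.sum()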
In this paper we show how to implement, in a simple way, some complex real-life constraints on the portfolio optimization problem, so that it becomes amenable to quantum optimization algorithms. Specifically, we first explain how to obtain the best investment portfolio with a given target risk. This is important in order to produce portfolios with different risk profiles, as typically offered by financial institutions. Second, we show how to implement individual investment bands, i.e., minimum and maximum possible investments for each asset. This is also important in order to impose diversification and avoid corner solutions. Quite remarkably, we show how to build the constrained cost function as a quadratic unconstrained binary optimization (QUBO) problem, this being the natural input of quantum annealers. The validity of our implementation is demonstrated by finding optimal portfolios, using D-Wave Hybrid and its Advantage quantum processor, for universes built from all the assets of the S&P 100 and S&P 500. Our results show how practical daily constraints found in quantitative finance can be implemented in a simple way in current NISQ quantum processors, with real data and under realistic market conditions. In combination with clustering algorithms, our methods would make it possible to replicate the behaviour of more complex indices, such as the Nasdaq Composite, which in turn is particularly useful for building and replicating Exchange Traded Funds (ETFs).
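A minimal sketch of the binary encoding with investment bands is shown below: each weight is written as w_i = w_min_i + step_i * sum_k 2^k b_ik with binary b_ik, so the bands are enforced by the encoding itself, while the budget constraint enters as a quadratic penalty. Here the risk appears through a simple risk-aversion term rather than the paper's target-risk formulation, and mu, Sigma, w_min, w_max, bits, lam and penalty are assumed inputs:

    import numpy as np

    def build_qubo(mu, Sigma, w_min, w_max, bits=3, lam=1.0, penalty=10.0):
        n = len(mu)
        nvar = n * bits
        # Linear map from binary variables to weights: w = w_min + A b.
        A = np.zeros((n, nvar))
        for i in range(n):
            step = (w_max[i] - w_min[i]) / (2 ** bits - 1)
            for k in range(bits):
                A[i, i * bits + k] = step * (2 ** k)
        # Cost: -mu'w + lam * w'Sigma w + penalty * (1'w - 1)^2, with w = w_min + A b.
        ones = np.ones(n)
        Q = lam * A.T @ Sigma @ A + penalty * np.outer(A.T @ ones, ones @ A)
        lin = (-mu @ A + 2 * lam * (w_min @ Sigma @ A)
               + 2 * penalty * (w_min.sum() - 1.0) * (ones @ A))
        Q[np.diag_indices(nvar)] += lin   # linear terms go on the diagonal (b^2 = b)
        return Q                          # hand this matrix to any QUBO sampler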
We study the optimal portfolio allocation problem from a Bayesian perspective using value at risk (VaR) and conditional value at risk (CVaR) as risk measures. By applying the posterior predictive distribution of the future portfolio return, we derive the relevant quantiles needed in the computation of VaR and CVaR, and express the optimal portfolio weights in terms of observed data only. This is in contrast to the conventional method, where the optimal solution is based on unobserved quantities that must be estimated, leading to suboptimality. We also obtain expressions for the weights of the global minimum VaR and CVaR portfolios, and specify conditions for their existence. It is shown that these portfolios may not exist if the confidence level used in the VaR or CVaR computation is too low. Moreover, analytical expressions for the mean-VaR and mean-CVaR efficient frontiers are presented, and the extension of the theoretical results to general coherent risk measures is provided. One of the main advantages of the suggested Bayesian approach is that the theoretical results are derived in the finite-sample case; thus they are exact and can be applied to large-dimensional portfolios. Using simulation and real market data, we compare the new Bayesian approach to the conventional method by studying the performance and existence of the global minimum VaR portfolio and by analysing the estimated efficient frontiers. It is concluded that the Bayesian approach outperforms the conventional one, in particular at predicting the out-of-sample VaR.
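While the paper derives the posterior predictive quantiles in closed form, a Monte Carlo illustration of the same idea under a diffuse (Jeffreys) prior and normally distributed returns is sketched below; `returns` (a T x n sample with T > n) and `w` (a weight vector) are assumed inputs, and the simulation is only a stand-in for the exact expressions:

    import numpy as np
    from scipy.stats import invwishart

    def predictive_var_cvar(returns, w, alpha=0.05, n_draws=20000, seed=0):
        rng = np.random.default_rng(seed)
        T, n = returns.shape
        xbar = returns.mean(axis=0)
        S = (returns - xbar).T @ (returns - xbar)      # sum-of-squares matrix
        sims = np.empty(n_draws)
        for j in range(n_draws):
            # Posterior under the Jeffreys prior: Sigma ~ IW(T-1, S), mu | Sigma ~ N(xbar, Sigma/T).
            Sigma = invwishart.rvs(df=T - 1, scale=S, random_state=rng)
            mu = rng.multivariate_normal(xbar, Sigma / T)
            # Posterior predictive draw of the next-period portfolio return.
            sims[j] = w @ rng.multivariate_normal(mu, Sigma)
        q = np.quantile(sims, alpha)
        var = -q                                       # predictive VaR at level alpha
        cvar = -sims[sims <= q].mean()                 # predictive CVaR at level alpha
        return var, cvar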