
Machine Learning Classification Methods and Portfolio Allocation: An Examination of Market Efficiency

 Added by Yang Bai
 Publication date 2021
Language: English





We design a novel framework to examine market efficiency through out-of-sample (OOS) predictability. We frame the asset-pricing problem as a machine learning classification problem and construct classification models to predict return states. The prediction-based portfolios beat the market with significant OOS economic gains. We measure prediction accuracy directly: for each model, we introduce a novel application of the binomial test to assess the accuracy of 3.34 million return-state predictions. The tests show that our models can extract useful content from historical information to predict future return states. We provide unique economic insights into OOS predictability and machine learning models.
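The binomial test mentioned above can be sketched as follows: a one-sided exact test of whether a classifier's hit rate exceeds a chance benchmark. The counts below are made up for illustration; the paper's actual test covers 3.34 million predictions.

```python
# Sketch: one-sided exact binomial test of prediction accuracy.
# Null hypothesis: each prediction is correct with probability p0 (chance).
from math import comb

def binomial_pvalue(hits, n, p0):
    """One-sided p-value P(X >= hits) for X ~ Binomial(n, p0)."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k) for k in range(hits, n + 1))

# 560 correct predictions out of 1,000 versus a 50% chance benchmark
p = binomial_pvalue(560, 1000, 0.5)  # small p-value -> reject "no skill"
```

A small p-value indicates the observed hit rate is unlikely under pure chance, which is the sense in which the models "extract useful content" from historical information.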




Related research

We find economically and statistically significant gains when using machine learning for portfolio allocation between the market index and the risk-free asset. Optimal portfolio rules for time-varying expected returns and volatility are implemented with two Random Forest models: one forecasts the sign probabilities of the excess return using payout yields, and the other constructs an optimized volatility estimate. Reward-risk timing with machine learning provides substantial improvements over buy-and-hold in utility, risk-adjusted returns, and maximum drawdowns. This paper presents a new theoretical basis and a unifying framework for machine learning applied to both return- and volatility-timing.
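As a rough illustration of reward-risk timing, the sketch below maps a forecast sign probability and a volatility estimate into an equity weight via a mean-variance rule. The probability-to-expected-return proxy, the risk-aversion value, and the weight bounds are all assumptions for illustration, not the paper's actual rule.

```python
# Sketch: reward-risk timing rule combining a sign-probability forecast
# (as from the first Random Forest) with a volatility estimate (the second).
def timing_weight(p_up, sigma, gamma=5.0, scale=0.10):
    """Equity weight: expected-excess-return proxy over gamma * variance,
    clipped to a long-only range with a modest leverage cap."""
    mu = scale * (2.0 * p_up - 1.0)   # crude proxy: sign probability -> expected return
    w = mu / (gamma * sigma**2)
    return max(0.0, min(1.5, w))

# bullish forecast (60% up-probability) in a calm market (15% vol)
w = timing_weight(p_up=0.60, sigma=0.15)
```

The key mechanism is visible even in this toy rule: higher up-probability raises the allocation, while higher forecast volatility shrinks it.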
The paper predicts an Efficient Market Property for the equity market, under which stocks, when denominated in units of the growth optimal portfolio (GP), have zero instantaneous expected returns. Well-diversified equity portfolios are shown to approximate the GP, which explains the well-documented good performance of equally weighted portfolios. The proposed hierarchically weighted index (HWI) is shown to be an even better proxy for the GP: it sets weights equal within industrial and geographical groupings of stocks. When the HWI is used as a proxy for the GP, the Efficient Market Property cannot be easily rejected and appears to be very robust.
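A minimal sketch of the hierarchical weighting idea: split weight mass equally across groups, then equally across the stocks within each group. This assumes a single grouping level with made-up tickers; the actual HWI nests industrial and geographical groupings.

```python
# Sketch: one-level hierarchically equal weights (illustrative groups/tickers).
def hierarchical_weights(groups):
    """groups: dict mapping group label -> list of tickers.
    Each group gets equal total weight; stocks split it equally within."""
    per_group = 1.0 / len(groups)
    weights = {}
    for label, tickers in groups.items():
        for ticker in tickers:
            weights[ticker] = per_group / len(tickers)
    return weights

w = hierarchical_weights({"tech": ["A", "B", "C"], "energy": ["D"]})
```

Note how the lone stock in the smaller group carries the group's full share, so the index does not over-concentrate in heavily populated sectors the way plain equal weighting can.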
The popularity of deep reinforcement learning (DRL) methods in economics has increased exponentially. By combining capabilities from reinforcement learning (RL) and deep learning (DL), DRL offers vast opportunities for handling sophisticated, dynamic business environments. DRL is characterized by scalability, with the potential to be applied to high-dimensional problems involving noisy and nonlinear patterns in economic data. In this work, we first give a brief review of DL, RL, and DRL methods across diverse applications in economics, providing an in-depth view of the state of the art. We then investigate the architecture of DRL applied to economic applications, highlighting complexity, robustness, accuracy, performance, computational cost, risk constraints, and profitability. The survey results indicate that DRL can provide better performance and higher accuracy than traditional algorithms on real economic problems in the presence of risk parameters and ever-increasing uncertainty.
This article provides an overview of Supervised Machine Learning (SML) with a focus on applications to banking. The SML techniques covered include Bagging (Random Forest or RF), Boosting (Gradient Boosting Machine or GBM) and Neural Networks (NNs). We begin with an introduction to ML tasks and techniques. This is followed by a description of: i) tree-based ensemble algorithms including Bagging with RF and Boosting with GBMs, ii) Feedforward NNs, iii) a discussion of hyper-parameter optimization techniques, and iv) machine learning interpretability. The paper concludes with a comparison of the features of different ML algorithms. Examples taken from credit risk modeling in banking are used throughout the paper to illustrate the techniques and interpret the results of the algorithms.
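Of the hyper-parameter optimization techniques such an overview discusses, exhaustive grid search is the simplest. The sketch below uses a stand-in scoring function; in practice the score would be a cross-validated error of an RF, GBM, or NN, and the parameter names and optimum are assumptions for illustration.

```python
# Sketch: exhaustive grid search over a hyper-parameter grid.
from itertools import product

def grid_search(param_grid, score_fn):
    """Return (best_params, best_score) minimizing score_fn over the grid."""
    names = list(param_grid)
    best = None
    for values in product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = score_fn(params)
        if best is None or score < best[1]:
            best = (params, score)
    return best

# toy score: distance from an assumed optimum (depth=4, lr=0.1)
toy_score = lambda p: (p["depth"] - 4) ** 2 + (p["lr"] - 0.1) ** 2
best_params, best_score = grid_search(
    {"depth": [2, 4, 6], "lr": [0.01, 0.1, 0.3]}, toy_score)
```

Grid search scales poorly with the number of parameters, which is why such overviews also cover random search and Bayesian alternatives.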
The aim of this study is to investigate quantitatively whether share prices deviated from company fundamentals in the stock market crash of 2008. For this purpose, we use a large database containing the balance sheets and share prices of 7,796 worldwide companies for the period 2004 through 2013. We develop a panel regression model using three financial indicators--dividends per share, cash flow per share, and book value per share--as explanatory variables for share price. We then estimate individual company fundamentals for each year by removing the time fixed effects from the two-way fixed effects model, which we identified as the best of the panel regression models. One merit of our model is that we are able to extract unobservable factors of company fundamentals by using the individual fixed effects. Based on these results, we analyze the market anomaly quantitatively using the divergence rate--the rate of deviation of share price from a company's fundamentals. We find that share prices on average were overvalued in the period from 2005 to 2007, and were significantly undervalued in 2008, when the global financial crisis occurred. Share prices were equivalent to the fundamentals on average in the subsequent period. Our empirical results clearly demonstrate that the worldwide stock market fluctuated excessively in the period before and just after the global financial crisis of 2008.
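The removal of time fixed effects can be sketched as subtracting each year's cross-sectional mean from every observation in that year. The tiny panel below is fabricated for illustration; the study's model also includes individual (firm) fixed effects and the three fundamental regressors.

```python
# Sketch: stripping time fixed effects from a panel of (firm, year, value).
def remove_time_effects(panel):
    """Subtract each year's cross-sectional mean from that year's values."""
    by_year = {}
    for _, year, value in panel:
        by_year.setdefault(year, []).append(value)
    year_mean = {y: sum(vs) / len(vs) for y, vs in by_year.items()}
    return [(firm, year, value - year_mean[year]) for firm, year, value in panel]

demeaned = remove_time_effects([("A", 2008, 10.0), ("B", 2008, 6.0),
                                ("A", 2009, 12.0), ("B", 2009, 8.0)])
```

After demeaning, market-wide shocks common to all firms in a year (such as the 2008 crash) are absorbed, leaving the firm-specific component from which fundamentals are estimated.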
