
Deep Kernel Gaussian Process Based Financial Market Predictions

Published by: Wei Dai
Publication date: 2021
Research field: Finance
Paper language: English





A Gaussian Process with a deep kernel extends the classic GP regression model by constructing the kernel function with deep learning techniques such as long short-term memory (LSTM) networks. A Gaussian Process with a kernel learned by an LSTM, abbreviated as GP-LSTM, has the advantage of capturing the complex dependencies of financial sequential data while retaining the ability to perform probabilistic inference. However, to the best of our knowledge, the deep kernel Gaussian Process has not been applied to forecast conditional returns and volatility in financial markets. In this paper, a grid search algorithm for hyper-parameter optimization is integrated with GP-LSTM to predict both the conditional mean and volatility of stock returns, which are then combined to calculate the conditional Sharpe ratio for constructing a long-short portfolio. The experiments are performed on a dataset covering all constituents of the Shenzhen Stock Exchange Component Index. Based on the empirical results, we find that the GP-LSTM model provides more accurate forecasts of stock returns and volatility, as jointly evaluated by the performance of the constructed portfolios. Further sub-period analysis of the experimental results indicates that the superiority of the GP-LSTM model over the benchmark models stems from better performance in highly volatile periods.
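As a rough illustration of the pipeline sketched in the abstract, the snippet below (Python/PyTorch) builds a toy GP whose RBF kernel operates on LSTM-encoded return windows, produces a predictive mean and variance per stock, and ranks stocks by the resulting conditional Sharpe ratio to form a long-short portfolio. All class names, hyper-parameters, and the synthetic data are illustrative assumptions; the paper's actual architecture, grid-search procedure, and training loop are not described in the abstract and are omitted here.

```python
import torch
import torch.nn as nn


class LSTMFeatureExtractor(nn.Module):
    """Encode a window of past returns into a feature vector for the kernel."""
    def __init__(self, hidden_size=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)

    def forward(self, windows):                  # windows: (n, window_len, 1)
        _, (h_n, _) = self.lstm(windows)
        return h_n[-1]                           # last hidden state: (n, hidden_size)


def rbf_kernel(a, b, lengthscale=1.0, outputscale=1.0):
    """RBF kernel evaluated on the LSTM features (the 'deep kernel')."""
    return outputscale * torch.exp(-0.5 * torch.cdist(a, b).pow(2) / lengthscale ** 2)


def gp_predict(feat_train, y_train, feat_test, noise=1e-2):
    """Exact GP regression on deep features: predictive mean and variance."""
    K = rbf_kernel(feat_train, feat_train) + noise * torch.eye(len(y_train))
    K_s = rbf_kernel(feat_test, feat_train)
    K_ss = rbf_kernel(feat_test, feat_test)
    mean = (K_s @ torch.linalg.solve(K, y_train.unsqueeze(-1))).squeeze(-1)
    cov = K_ss - K_s @ torch.linalg.solve(K, K_s.T)
    return mean, cov.diagonal().clamp_min(1e-8)


# Toy usage on synthetic return windows (one window per stock). Training of the
# LSTM and of the kernel hyper-parameters (e.g. by grid search) is omitted.
torch.manual_seed(0)
n_stocks, window = 50, 20
past_returns = 0.01 * torch.randn(n_stocks, window, 1)
next_returns = 0.01 * torch.randn(n_stocks)

encoder = LSTMFeatureExtractor()
with torch.no_grad():
    feats = encoder(past_returns)
    mean, var = gp_predict(feats[:40], next_returns[:40], feats[40:])

sharpe = mean / var.sqrt()                       # conditional Sharpe ratio per stock
order = sharpe.argsort(descending=True)
print("long:", order[:2].tolist(), "short:", order[-2:].tolist())
```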




Read also

Gaussian processes offer an attractive framework for predictive modeling from longitudinal data, i.e., irregularly sampled, sparse observations from a set of individuals over time. However, such methods have two key shortcomings: (i) they rely on ad hoc heuristics or expensive trial and error to choose effective kernels, and (ii) they fail to handle the multilevel correlation structure in the data. We introduce Longitudinal deep kernel Gaussian process regression (L-DKGPR), which, to the best of our knowledge, is the only method to overcome these limitations by fully automating the discovery of complex multilevel correlation structure from longitudinal data. Specifically, L-DKGPR eliminates the need for ad hoc heuristics or trial and error using a novel adaptation of deep kernel learning that combines the expressive power of deep neural networks with the flexibility of non-parametric kernel methods. L-DKGPR effectively learns the multilevel correlation with a novel additive kernel that simultaneously accommodates both time-varying and time-invariant effects. We derive an efficient algorithm to train L-DKGPR using latent space inducing points and variational inference. Results of extensive experiments on several benchmark data sets demonstrate that L-DKGPR significantly outperforms state-of-the-art longitudinal data analysis (LDA) methods.
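The additive deep kernel idea can be sketched as follows: a neural encoder embeds each observation's covariates, and the covariance is the sum of a time-invariant component on those embeddings and a time-varying component on the observation times. The encoder sizes, kernel choices, and variable names below are assumptions for illustration; the paper's latent-space inducing points and variational training are omitted.

```python
import torch
import torch.nn as nn


class AdditiveDeepKernel(nn.Module):
    """k((x1,t1),(x2,t2)) = k_invariant(enc(x1), enc(x2)) + k_varying(t1, t2)."""
    def __init__(self, in_dim, embed_dim=8):
        super().__init__()
        # deep embedding of subject covariates (time-invariant effects)
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, embed_dim))
        self.log_ls_x = nn.Parameter(torch.zeros(()))    # lengthscale over embeddings
        self.log_ls_t = nn.Parameter(torch.zeros(()))    # lengthscale over time

    @staticmethod
    def _rbf(a, b, lengthscale):
        return torch.exp(-0.5 * torch.cdist(a, b).pow(2) / lengthscale ** 2)

    def forward(self, x1, t1, x2, t2):
        k_inv = self._rbf(self.encoder(x1), self.encoder(x2), self.log_ls_x.exp())
        k_var = self._rbf(t1, t2, self.log_ls_t.exp())
        return k_inv + k_var                     # additive combination


# Shape check on synthetic longitudinal covariates and observation times.
x = torch.randn(30, 5)                           # 30 observations, 5 covariates
t = torch.rand(30, 1)
K = AdditiveDeepKernel(in_dim=5)(x, t, x, t)
print(K.shape)                                   # torch.Size([30, 30])
```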
The history of research in finance and economics has been widely impacted by the field of Agent-based Computational Economics (ACE). While popular among natural science researchers for its proximity to the successful methods of, for example, physics and chemistry, the field of ACE has also received criticism from part of the social science community for its lack of empiricism. Yet recent trends have shifted the weights of these general arguments and potentially given ACE a whole new range of realism. At the base of these trends are two present-day major scientific breakthroughs: the steady shift of psychology towards a hard science due to the advances of neuropsychology, and the progress of artificial intelligence and more specifically machine learning due to increasing computational power and big data. These two have also found common fields of study in the form of computational neuroscience and human-computer interaction, among others. We outline here the main lines of a computational research study of collective economic behavior via Agent-Based Models (ABM) or Multi-Agent Systems (MAS), where each agent would be endowed with specific cognitive and behavioral biases known to the field of neuroeconomics, and at the same time autonomously implement rational quantitative financial strategies updated by machine learning. We postulate that such ABMs would offer a whole new range of realism.
The ultimate value of theories of the fundamental mechanisms behind asset prices in financial systems will be reflected in the capacity of such theories to explain these systems. Although models that explain the various states of financial markets offer substantial evidence from the fields of finance, mathematics, and even physics, previous theories that attempt to fully explain the complexities of financial markets have been inadequate. In this study, we propose an artificial double auction market as an agent-based model approach to study the origin of complex states in financial markets, characterizing important parameters with an investment strategy that can cover the dynamics of the financial market. The investment strategy of chartist traders after market information arrives should reduce market stability through the price fluctuations of risky assets. In contrast, fundamentalist traders strategically submit orders based on a fundamental value and thereby stabilize the market. We construct a continuous double auction market and find that the market is controlled by the fraction of chartists, $P_c$. We show that a state mimicking real financial markets emerges for approximately $P_c = 0.40$ to $P_c = 0.85$, whereas a state mimicking the efficient market hypothesis is generated for $P_c$ below 0.40. In particular, we observe that a state mimicking a market collapse, in which a liquidity shortage occurs, is created for $P_c$ greater than 0.85, with phase transition behavior at $P_c = 0.85$.
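A heavily simplified sketch of the chartist/fundamentalist interplay described above: a fraction P_c of agents chases the recent trend while the rest trades toward a fundamental value, and the price moves with the aggregate order imbalance. This toy price-impact model is not the paper's continuous double auction, and every parameter value is an illustrative assumption.

```python
import numpy as np


def simulate(p_c, n_agents=200, n_steps=1000, fundamental=100.0, seed=0):
    """Toy price path with a fraction p_c of trend-chasing chartists."""
    rng = np.random.default_rng(seed)
    is_chartist = rng.random(n_agents) < p_c
    prices = [fundamental, fundamental]
    for _ in range(n_steps):
        trend = prices[-1] - prices[-2]
        # chartists chase the trend, fundamentalists revert to the fundamental value
        orders = np.where(is_chartist,
                          np.sign(trend),
                          np.sign(fundamental - prices[-1]))
        impact = 0.1 * orders.sum() / n_agents   # aggregate order imbalance
        prices.append(prices[-1] + impact + rng.normal(0.0, 0.3))
    return np.array(prices)


# In this toy model the price strays further from the fundamental value as the
# chartist fraction grows, i.e. market stability deteriorates.
for p_c in (0.2, 0.6, 0.9):
    deviation = np.std(simulate(p_c) - 100.0)
    print(f"P_c = {p_c:.1f}  ->  std of deviation from fundamental: {deviation:.2f}")
```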
We investigate financial market correlations using random matrix theory and principal component analysis. We use random matrix theory to demonstrate that correlation matrices of asset price changes contain structure that is incompatible with uncorrelated random price changes. We then identify the principal components of these correlation matrices and demonstrate that a small number of components accounts for a large proportion of the variability of the markets that we consider. We then characterize the time-evolving relationships between the different assets by investigating the correlations between the asset price time series and principal components. Using this approach, we uncover notable changes that occurred in financial markets and identify the assets that were significantly affected by these changes. We show in particular that there was an increase in the strength of the relationships between several different markets following the 2007--2008 credit and liquidity crisis.
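The diagnostic described above can be sketched in a few lines: compute the eigenvalues of an asset-return correlation matrix, compare them with the Marchenko-Pastur bounds expected for uncorrelated returns, and report the variance explained by the leading principal components. The synthetic one-factor returns below are an illustrative stand-in for the market data analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets, n_obs = 50, 500
market = rng.normal(size=n_obs)                       # common "market" factor
returns = 0.5 * market[:, None] + rng.normal(size=(n_obs, n_assets))

corr = np.corrcoef(returns, rowvar=False)             # (n_assets, n_assets)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Marchenko-Pastur support for the correlation matrix of purely random returns
q = n_assets / n_obs
lam_max = (1 + np.sqrt(q)) ** 2
lam_min = (1 - np.sqrt(q)) ** 2

print(f"MP support for uncorrelated returns: [{lam_min:.2f}, {lam_max:.2f}]")
print(f"largest eigenvalue: {eigvals[0]:.2f} (outside the bulk -> market mode)")
explained = eigvals[:3].sum() / eigvals.sum()
print(f"top 3 principal components explain {explained:.1%} of total variance")
```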
The model describing market dynamics after a large financial crash is considered in terms of a stochastic differential equation of the Ito type. Physically, the model represents an overdamped Brownian particle moving in a nonstationary one-dimensional potential $U$ under the influence of a variable noise intensity that depends on the particle position $x$. Based on empirical data, the approximate estimation of the Kramers-Moyal coefficients $D_{1,2}$ allows us to predict quite definitely the behavior of the potential, introduced by $D_1 = -\partial U/\partial x$, and the volatility $\sim \sqrt{D_2}$. It has been shown that the presented model describes well enough the best-known empirical facts related to the large financial crash of October 1987.
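A minimal sketch of the Kramers-Moyal estimation outlined above: bin a time series by its current value $x$, estimate the drift $D_1(x)$ and diffusion $D_2(x)$ from conditional moments of the increments, and recall that the potential follows from $U(x) = -\int D_1(x)\,dx$ and the volatility from $\sqrt{D_2}$. The synthetic Ornstein-Uhlenbeck series below is an illustrative stand-in for the post-crash market data used in the paper.

```python
import numpy as np

# Synthetic Ornstein-Uhlenbeck path: dx = -2x dt + 0.5 dW (toy stand-in for data).
rng = np.random.default_rng(0)
dt, n = 0.01, 100_000
x = np.zeros(n)
for i in range(n - 1):
    x[i + 1] = x[i] - 2.0 * x[i] * dt + 0.5 * np.sqrt(dt) * rng.normal()

# Bin by the current value x and estimate the first two Kramers-Moyal coefficients.
dx = np.diff(x)
bins = np.linspace(x.min(), x.max(), 21)
centers = 0.5 * (bins[:-1] + bins[1:])
idx = np.digitize(x[:-1], bins) - 1

D1 = np.full(len(centers), np.nan)               # drift:     D1(x) = <dx | x> / dt
D2 = np.full(len(centers), np.nan)               # diffusion: D2(x) = <dx^2 | x> / (2 dt)
for b in range(len(centers)):
    sel = idx == b
    if sel.sum() > 100:                          # require enough samples per bin
        D1[b] = dx[sel].mean() / dt
        D2[b] = (dx[sel] ** 2).mean() / (2 * dt)

# The drift defines the potential via U(x) = -∫ D1(x) dx; here we only check that
# the recovered drift and volatility match the coefficients used to generate x.
valid = ~np.isnan(D1)
slope = np.polyfit(centers[valid], D1[valid], 1)[0]
print("estimated drift slope:", round(slope, 2))                                    # about -2
print("estimated volatility sqrt(D2):", round(float(np.nanmean(np.sqrt(D2))), 3))   # about 0.35
```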
