
A model of returns for the post-credit-crunch reality: Hybrid Brownian motion with price feedback

Posted by William Shaw
Publication date: 2009
Research field: Finance
Paper language: English
Author: William T. Shaw





The market events of 2007-2009 have reinvigorated the search for realistic return models that capture greater likelihoods of extreme movements. In this paper we model the medium-term log-return dynamics in a market with both fundamental and technical traders. This is based on a Poisson trade arrival model with variable size orders. With simplifications we are led to a hybrid SDE mixing both arithmetic and geometric Brownian motions, whose solution is given by a class of integrals of exponentials of one Brownian motion against another, in forms considered by Yor and collaborators. The reduction of the hybrid SDE to a single Brownian motion leads to an SDE of the form considered by Nagahara, which is a type of Pearson diffusion, or equivalently a hyperbolic OU SDE. Various dynamics and equilibria are possible depending on the balance of trades. Under mean-reverting circumstances we arrive naturally at an equilibrium fat-tailed return distribution with a Student or Pearson Type IV form. Under less restrictive assumptions richer dynamics are possible, including bimodal structures. The phenomenon of variance explosion is identified that gives rise to much larger price movements than might a priori have been expected, so that $25\sigma$ events are significantly more probable. We exhibit simple example solutions of the Fokker-Planck equation that show how such variance explosion can hide beneath a standard Gaussian facade. These are elementary members of an extended class of distributions with a rich and varied structure, capable of describing a wide range of market behaviours. Several approaches to the density function are possible, and an example of the computation of a hyperbolic VaR is given. The model also suggests generalizations of the Bougerol identity.
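The following is not code from the paper, but a minimal Euler-Maruyama sketch of a mean-reverting diffusion of the hybrid form described above: additive ("arithmetic") and multiplicative ("geometric") noise driven by two independent Brownian motions, with illustrative parameters chosen here. Under mean reversion the terminal distribution develops the fat, Student-like tails the abstract mentions, visible as positive excess kurtosis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): mean-reversion rate,
# additive ("arithmetic") and multiplicative ("geometric") noise scales.
theta, sigma_a, sigma_g = 1.0, 0.2, 0.5
T, n_steps, n_paths = 10.0, 10_000, 2_000
dt = T / n_steps

x = np.zeros(n_paths)
for _ in range(n_steps):
    dW1 = rng.normal(0.0, np.sqrt(dt), n_paths)  # drives the additive term
    dW2 = rng.normal(0.0, np.sqrt(dt), n_paths)  # drives the multiplicative term
    # Hybrid SDE: dx = -theta*x dt + sigma_a dW1 + sigma_g*x dW2
    x += -theta * x * dt + sigma_a * dW1 + sigma_g * x * dW2

# Under mean reversion the stationary law is fat-tailed (Student/Pearson IV
# type), so the sample excess kurtosis sits well above the Gaussian value 0.
m, s = x.mean(), x.std()
kurt = ((x - m) ** 4).mean() / s**4 - 3.0
print(f"sample mean {m:.3f}, std {s:.3f}, excess kurtosis {kurt:.2f}")
```

With these parameters the reduced single-Brownian-motion form has diffusion coefficient $\sqrt{\sigma_a^2 + \sigma_g^2 x^2}$, the hyperbolic OU shape, which is why the tails are polynomial rather than Gaussian.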




Read also

177 - Lucien Boulet 2021
Several academics have studied the ability of hybrid models mixing univariate Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models and neural networks to deliver better volatility predictions than purely econometric models. Despite presenting very promising results, the generalization of such models to the multivariate case has yet to be studied. Moreover, very few papers have examined the ability of neural networks to predict the covariance matrix of asset returns, and all use a rather small number of assets, thus not addressing what is known as the curse of dimensionality. The goal of this paper is to investigate the ability of hybrid models, mixing GARCH processes and neural networks, to forecast covariance matrices of asset returns. To do so, we propose a new model, based on multivariate GARCHs, that decomposes volatility and correlation predictions. The volatilities are here forecast using hybrid neural networks while correlations follow a traditional econometric process. After implementing the models in a minimum-variance portfolio framework, our results are as follows. First, the addition of GARCH parameters as inputs is beneficial to the proposed model. Second, the use of one-hot encoding to help the neural network differentiate between stocks improves performance. Third, the new model is very promising, as it not only outperforms the equally weighted portfolio but also beats its econometric counterpart, which uses univariate GARCHs to predict the volatilities, by a significant margin.
In this work we build a stack of machine learning models aimed at composing a state-of-the-art credit rating and default prediction system, obtaining excellent out-of-sample performances. Our approach is an excursion through the most recent ML / AI concepts, starting from natural language processing (NLP) applied to (textual) economic sector descriptions using embedding and autoencoders (AE), going through the classification of defaultable firms on the basis of a wide range of economic features using gradient boosting machines (GBM), and calibrating their probabilities paying due attention to the treatment of unbalanced samples. Finally we assign credit ratings through genetic algorithms (differential evolution, DE). Model interpretability is achieved by implementing recent techniques such as SHAP and LIME, which explain predictions locally in feature space.
Extreme event statistics plays a very important role in the theory and practice of time series analysis. The applicability of classical theoretical results is often undermined by non-stationarity and dependence between increments. Furthermore, the convergence to the limit distributions can be slow, requiring a huge number of records to obtain significant statistics, and thus limiting practical applications. Focussing, instead, on the closely related density of near-extremes -- the distance between a record and the maximal value -- can make the statistical methods more suitable for practical applications and/or validations of models. We apply this recently proposed method in the empirical validation of an adapted financial market model of the intraday market fluctuations.
In Artificial Intelligence, interpreting the results of a Machine Learning technique, often termed a black box, is a difficult task. A counterfactual explanation of a particular black box attempts to find the smallest change to the input values that modifies the prediction to a particular output, other than the original one. In this work we formulate the problem of finding a counterfactual explanation as an optimization problem. We propose a new sparsity algorithm which solves the optimization problem while also maximizing the sparsity of the counterfactual explanation. We apply the sparsity algorithm to provide a simple suggestion to publicly traded companies in order to improve their credit ratings. We validate the sparsity algorithm with a synthetically generated dataset and further apply it to quarterly financial statements from companies in the financial, healthcare and IT sectors of the US market. We provide evidence that the counterfactual explanation can capture the nature of the real statement features that changed between the current quarter and the following quarter when ratings improved. The empirical results show that the higher the rating of a company, the greater the effort required to further improve its credit rating.
We test three common information criteria (IC) for selecting the order of a Hawkes process with an intensity kernel that can be expressed as a mixture of exponential terms. These processes find application in high-frequency financial data modelling. The information criteria are Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and the Hannan-Quinn criterion (HQ). Since we work with simulated data, we are able to measure the performance of model selection by the success rate of the IC in selecting the model that was used to generate the data. In particular, we are interested in the relation between correct model selection and the underlying sample size. The analysis includes realistic sample sizes and parameter sets from recent literature where parameters were estimated using empirical financial intra-day data. We compare our results to theoretical predictions and similar empirical findings on the asymptotic distribution of model selection for consistent and inconsistent IC.
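As a minimal illustration of the three criteria compared in the last abstract (standard textbook definitions, not code from that paper, with toy log-likelihood numbers), all three trade fit against parameter count, differing only in the penalty term:

```python
import numpy as np

def info_criteria(loglik: float, k: int, n: int) -> dict:
    """AIC, BIC and Hannan-Quinn for a fit with log-likelihood `loglik`,
    k free parameters and n observations. Smaller is better for all three."""
    return {
        "AIC": 2 * k - 2 * loglik,
        "BIC": k * np.log(n) - 2 * loglik,
        "HQ": 2 * k * np.log(np.log(n)) - 2 * loglik,
    }

# Toy comparison (illustrative numbers): a 2-parameter vs a 4-parameter fit
# on n = 500 points, where the extra parameters buy only a small likelihood gain.
small = info_criteria(loglik=-1000.0, k=2, n=500)
large = info_criteria(loglik=-998.5, k=4, n=500)
prefers_small = {c: small[c] < large[c] for c in small}
print(prefers_small)  # here every criterion prefers the smaller model
```

BIC penalizes extra parameters hardest for realistic n, which is why it is the consistent criterion in the asymptotic results the abstract refers to.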