
Designing a NISQ reservoir with maximal memory capacity for volatility forecasting

Added by: Samudra Dasgupta
Publication date: 2020
Field: Financial Physics
Language: English





Current noisy intermediate-scale quantum (NISQ) devices are far from being fault-tolerant, but they do permit some limited applications. In this study, we use a hybrid quantum-classical reservoir computing model to forecast the CBOE volatility index (VIX) from the S&P500 (SPX) time series. The NISQ component of our model is executed on IBM's 53-qubit Rochester chip. We encode the SPX values in rotation angles and linearly combine the average spins of the six-qubit register to predict the value of VIX at the next time step. Our results demonstrate a potential approach to using noisy quantum devices for non-linear time-series forecasting tasks.
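The sketch below is a purely classical NumPy mock-up of this idea, not the authors' implementation: each rescaled SPX value is encoded as RY rotation angles on a six-qubit register, a fixed random unitary stands in for the noisy reservoir dynamics of the Rochester chip, the average spins of the qubits are read out, and a ridge-regression readout linearly combines them to predict the next-step VIX. The function names, toy data, and rescaling choices are all illustrative assumptions.

```python
# Classical sketch of a six-qubit reservoir with a linear readout (toy data,
# hypothetical names; the paper runs the quantum part on real hardware).
import numpy as np

rng = np.random.default_rng(0)
N_QUBITS = 6
DIM = 2 ** N_QUBITS

# Fixed random unitary playing the role of the (noisy) reservoir dynamics.
RESERVOIR_U, _ = np.linalg.qr(
    rng.normal(size=(DIM, DIM)) + 1j * rng.normal(size=(DIM, DIM))
)

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def z_expectations(state):
    """Average spin <Z_i> of each qubit in the full six-qubit state vector."""
    probs = np.abs(state) ** 2
    exps = []
    for q in range(N_QUBITS):
        signs = np.array([1 if (b >> (N_QUBITS - 1 - q)) & 1 == 0 else -1
                          for b in range(DIM)])
        exps.append(float(probs @ signs))
    return np.array(exps)

def reservoir_features(spx_value, lo, hi):
    """Encode one rescaled SPX value as RY angles, evolve, read out <Z_i>."""
    theta = np.pi * (spx_value - lo) / (hi - lo)   # map price to [0, pi]
    rotation = ry(theta)
    for _ in range(N_QUBITS - 1):                  # same RY on every qubit
        rotation = np.kron(rotation, ry(theta))
    basis = np.zeros(DIM, dtype=complex)
    basis[0] = 1.0                                 # |000000> initial state
    state = RESERVOIR_U @ (rotation @ basis)
    return z_expectations(state)

# Toy series standing in for SPX (inputs) and next-step VIX (targets).
spx = np.cumsum(rng.normal(size=200)) + 3000.0
vix = 15.0 + 5.0 * np.abs(np.diff(spx, prepend=spx[0]))
lo, hi = spx.min(), spx.max()

X = np.array([reservoir_features(x, lo, hi) for x in spx[:-1]])
y = vix[1:]

# Linear readout (ridge regression) combining the six average spins.
lam = 1e-3
w = np.linalg.solve(X.T @ X + lam * np.eye(N_QUBITS), X.T @ y)
print("readout weights:", np.round(w, 3))
print("in-sample RMSE:", np.sqrt(np.mean((X @ w - y) ** 2)))
```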



Related research

A key problem in financial mathematics is the forecasting of financial crashes: if we perturb asset prices, will financial institutions fail on a massive scale? This was recently shown to be a computationally intractable (NP-hard) problem. Financial crashes are inherently difficult to predict, even for a regulator which has complete information about the financial system. In this paper we show how this problem can be handled by quantum annealers. More specifically, we map the equilibrium condition of a toy-model financial network to the ground-state problem of a spin-1/2 quantum Hamiltonian with 2-body interactions, i.e., a quadratic unconstrained binary optimization (QUBO) problem. The equilibrium market values of institutions after a sudden shock to the network can then be calculated via adiabatic quantum computation and, more generically, by quantum annealers. Our procedure could be implemented on near-term quantum processors, thus providing a potentially more efficient way to assess financial equilibrium and predict financial crashes.
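As a toy illustration of the "ground state of a QUBO" framing, the sketch below enumerates the minimum of a small quadratic binary cost; the coefficients are arbitrary placeholders and do not reproduce the paper's actual network-to-QUBO mapping. A quantum annealer approximately samples the same minimum for instances far too large to enumerate.

```python
# Generic QUBO ground-state search by enumeration (toy coefficients only).
import itertools
import numpy as np

# x_i = 1 could flag "institution i fails after the shock" in a toy network.
Q = np.array([
    [-1.0,  0.8,  0.2],
    [ 0.0, -0.5,  0.6],
    [ 0.0,  0.0, -0.3],
])  # upper-triangular QUBO: energy(x) = x^T Q x

def energy(x):
    x = np.asarray(x, dtype=float)
    return float(x @ Q @ x)

best = min(itertools.product([0, 1], repeat=Q.shape[0]), key=energy)
print("ground state:", best, "energy:", energy(best))
# An annealer (e.g. D-Wave) targets this minimum for much larger networks.
```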
In this paper we show how to implement, in a simple way, some complex real-life constraints on the portfolio optimization problem, so that it becomes amenable to quantum optimization algorithms. Specifically, we first explain how to obtain the best investment portfolio with a given target risk. This is important in order to produce portfolios with different risk profiles, as typically offered by financial institutions. Second, we show how to implement individual investment bands, i.e., minimum and maximum possible investments for each asset. This is also important in order to impose diversification and avoid corner solutions. Quite remarkably, we show how to build the constrained cost function as a quadratic unconstrained binary optimization (QUBO) problem, this being the natural input of quantum annealers. The validity of our implementation is proven by finding the optimal portfolios, using D-Wave Hybrid and its Advantage quantum processor, on portfolios built with all the assets from the S&P100 and S&P500. Our results show how practical daily constraints found in quantitative finance can be implemented in a simple way on current NISQ quantum processors, with real data and under realistic market conditions. In combination with clustering algorithms, our methods would make it possible to replicate the behaviour of more complex indices, such as the Nasdaq Composite, in turn being particularly useful for building and replicating Exchange Traded Funds (ETFs).
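The sketch below shows one common way, not necessarily the authors' exact construction, to keep such a cost quadratic in binary variables: each weight is binary-encoded inside its investment band, and the budget constraint enters as a quadratic penalty, so the whole objective expands into a QUBO. All returns, covariances, bands, and penalty values are illustrative.

```python
# Hedged sketch: band-constrained portfolio selection as a quadratic binary cost.
import itertools
import numpy as np

mu = np.array([0.08, 0.05, 0.11])                # expected returns (toy)
Sigma = np.array([[0.10, 0.02, 0.01],
                  [0.02, 0.06, 0.01],
                  [0.01, 0.01, 0.15]])           # covariance (toy)
w_min = np.array([0.10, 0.10, 0.10])             # per-asset investment bands
w_max = np.array([0.60, 0.50, 0.60])
BITS = 3                                         # resolution of each weight

def weights(bits):
    """Map binary variables to weights inside the bands, 2^BITS levels each."""
    bits = np.asarray(bits).reshape(len(mu), BITS)
    frac = bits @ (2.0 ** -np.arange(1, BITS + 1))   # value in [0, 1)
    return w_min + (w_max - w_min) * frac

def cost(bits, risk_aversion=3.0, budget_penalty=20.0):
    w = weights(bits)
    # Quadratic objective: trade off return vs. risk, enforce sum(w) = 1 softly.
    return (-mu @ w
            + risk_aversion * w @ Sigma @ w
            + budget_penalty * (w.sum() - 1.0) ** 2)

best = min(itertools.product([0, 1], repeat=len(mu) * BITS), key=cost)
w = weights(best)
print("weights:", np.round(w, 3), "risk:", round(float(w @ Sigma @ w), 4))
# On an annealer the same cost is expanded into an explicit QUBO matrix over
# the bits, and the risk-aversion knob is swept until the target risk is met.
```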
In this paper we propose a multivariate quantile regression framework to forecast Value at Risk (VaR) and Expected Shortfall (ES) of multiple financial assets simultaneously, extending Taylor (2019). We generalize the Multivariate Asymmetric Laplace (MAL) joint quantile regression of Petrella and Raponi (2019) to a time-varying setting, which allows us to specify a dynamic process for the evolution of both VaR and ES of each asset. The proposed methodology accounts for the dependence structure among asset returns. By exploiting the properties of the MAL distribution, we then propose a new portfolio optimization method that minimizes the portfolio risk and controls for well-known characteristics of financial data. We evaluate the advantages of the proposed approach on both simulated and real data, using weekly returns on three major stock market indices. We show that our method outperforms other existing models and provides more accurate risk measure forecasts compared to univariate ones.
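As a much-simplified univariate illustration of the quantities involved (the paper's model is multivariate, MAL-based, and time-varying), the sketch below estimates VaR by minimizing the pinball loss that underlies quantile regression, and ES as the mean of the tail beyond the VaR estimate; the simulated returns are placeholders.

```python
# Simplified univariate VaR/ES estimation via the quantile (pinball) loss.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
returns = rng.standard_t(df=5, size=2000) * 0.01      # toy weekly returns
alpha = 0.05                                          # 5% tail level

def pinball_loss(q):
    """Quantile loss whose minimizer is the alpha-quantile of the returns."""
    u = returns - q
    return np.mean(np.maximum(alpha * u, (alpha - 1) * u))

var_hat = minimize_scalar(pinball_loss).x             # VaR = alpha-quantile
es_hat = returns[returns <= var_hat].mean()           # ES = mean of the tail
print(f"VaR({alpha:.0%}) = {var_hat:.4f}, ES = {es_hat:.4f}")
```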
The paper examines the potential of deep learning to support decisions in financial risk management. We develop a deep learning model for predicting whether individual spread traders secure profits from future trades. This task embodies typical modeling challenges faced in risk and behavior forecasting. Conventional machine learning requires data that is representative of the feature-target relationship and relies on the often costly development, maintenance, and revision of handcrafted features. Consequently, modeling highly variable, heterogeneous patterns such as trader behavior is challenging. Deep learning promises a remedy. By learning hierarchical distributed representations of the data in an automatic manner (e.g., of risk-taking behavior), it uncovers generative features that determine the target (e.g., a trader's profitability), avoids manual feature engineering, and is more robust to change (e.g., dynamic market conditions). The results of employing a deep network for operational risk forecasting confirm the feature-learning capability of deep learning, provide guidance on designing a suitable network architecture, and demonstrate the superiority of deep learning over machine learning and rule-based benchmarks.
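A minimal sketch of the underlying classification task, whether a trader's next trade is profitable, is shown below using a small scikit-learn feed-forward network on synthetic behavioural features; the paper's deep architecture, handcrafted-feature benchmarks, and real spread-trading data are not reproduced, and the feature names are illustrative assumptions.

```python
# Toy profit/no-profit classifier standing in for the paper's deep network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
# Illustrative behavioural features: past win rate, avg leverage, trade frequency.
X = rng.normal(size=(n, 3))
logits = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 0] * X[:, 2]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```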
Tian Qiu (2008)
We investigate the probability distribution of the volatility return intervals $\tau$ for the Chinese stock market. We rescale both the probability distribution $P_{q}(\tau)$ and the volatility return intervals $\tau$ as $P_{q}(\tau) = \frac{1}{\bar{\tau}} f(\tau/\bar{\tau})$ to obtain a uniform scaling curve for different threshold values $q$. The scaling curve can be well fitted by the stretched exponential function $f(x) \sim e^{-\alpha x^{\gamma}}$, which suggests that memory exists in $\tau$. To demonstrate the memory effect, we investigate the conditional probability distribution $P_{q}(\tau|\tau_{0})$, the mean conditional interval $\langle \tau|\tau_{0} \rangle$, and the cumulative probability distribution of the cluster size of $\tau$. The results show a clear clustering effect. We further investigate the persistence probability distribution $P_{\pm}(t)$ and find that $P_{-}(t)$ decays as a power law with an exponent far from the value 0.5 expected for a random walk, which further confirms that long memory exists in $\tau$. The scaling and long-memory effects of $\tau$ for the Chinese stock market are similar to those obtained from the United States and Japanese financial markets.
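The sketch below illustrates the rescaling and the stretched-exponential fit on synthetic intervals (the paper uses actual volatility return intervals from Chinese stocks); the Weibull-generated data, bin counts, and fit starting values are placeholder assumptions.

```python
# Rescale toy return intervals by their mean and fit f(x) ~ exp(-alpha * x^gamma).
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
tau = rng.weibull(a=0.7, size=20000) * 10.0          # toy return intervals
x = tau / tau.mean()                                 # rescale by the mean interval

# Empirical scaled distribution f(x) = tau_bar * P_q(tau) evaluated at x.
hist, edges = np.histogram(x, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mask = hist > 0

def stretched_exp(x, c, alpha, gamma):
    return c * np.exp(-alpha * x ** gamma)

params, _ = curve_fit(stretched_exp, centers[mask], hist[mask],
                      p0=(1.0, 1.0, 0.7), maxfev=10000)
print("fit: c=%.3f alpha=%.3f gamma=%.3f" % tuple(params))
# gamma < 1 indicates a stretched exponential, i.e. memory in the intervals.
```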
