We developed and compared Constraint Programming (CP) and Quantum Annealing (QA) approaches for rolling stock optimisation considering necessary maintenance tasks. To deal with such problems in CP, we investigated specialised pruning rules and implemented them in a global constraint. For the QA approach, we developed quadratic unconstrained binary optimisation (QUBO) models. For testing, we use data sets based on real data from Deutsche Bahn and run the QA approach on real quantum computers from D-Wave. Classical computers are used to run the CP approach as well as tabu search for the QUBO models. We find that, at the current development stage of physical quantum annealers, both approaches tend to produce comparable results, with the caveat that QUBO does not always guarantee that the maintenance constraints hold. We fix this by adjusting the QUBO model in preprocessing, based on how close each train is to its maintenance threshold distance.
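To make the preprocessing idea concrete, the toy sketch below builds a QUBO in which each train must receive exactly one maintenance slot, with the one-hot penalty weight scaled by how close the train is to its threshold distance. This is an illustration only, not the paper's actual model: the data, the penalty-scaling rule, and the omission of slot-capacity terms are all assumptions, and brute-force enumeration stands in for the D-Wave annealer or tabu search.

    import itertools
    import numpy as np

    # Toy data: distance each train has run and the maintenance threshold.
    km_run = np.array([9500.0, 4000.0])   # hypothetical odometer readings
    threshold = 10000.0                   # maintenance due at this distance
    n_trains, n_slots = 2, 2

    # Penalty weight per train: the closer to the threshold, the harder we
    # enforce its "exactly one slot" constraint.
    penalty = 1.0 + 10.0 * km_run / threshold

    # Binary variable x[t, s] = 1 iff train t gets maintenance slot s.
    # QUBO encoding of penalty * (sum_s x[t, s] - 1)^2, expanded:
    # -penalty on each diagonal term, +2 * penalty on each pair.
    Q = {}
    def add(i, j, w):
        Q[(i, j)] = Q.get((i, j), 0.0) + w

    idx = lambda t, s: t * n_slots + s
    for t in range(n_trains):
        for s in range(n_slots):
            add(idx(t, s), idx(t, s), -penalty[t])
            for s2 in range(s + 1, n_slots):
                add(idx(t, s), idx(t, s2), 2.0 * penalty[t])

    # Brute-force minimisation stands in for the annealer / tabu search.
    n = n_trains * n_slots
    best = min(itertools.product([0, 1], repeat=n),
               key=lambda x: sum(w * x[i] * x[j] for (i, j), w in Q.items()))
    print(np.array(best).reshape(n_trains, n_slots))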
We introduce a minimalist dynamical model of wealth evolution and wealth sharing among $N$ agents as a platform to compare the relative merits of altruism and individualism. In our model, the wealth of each agent independently evolves by diffusion. For a population of altruists, whenever any agent reaches zero wealth (that is, the agent goes bankrupt), the remaining wealth of the other $N-1$ agents is equally shared among all. The population is collectively defined to be bankrupt when its total wealth falls below a specified small threshold value. For individualists, each time an agent goes bankrupt the agent is considered dead and no wealth redistribution occurs. We determine the evolution of wealth in these two societies. Altruism leads to a higher global median wealth at early times; eventually, however, the longest-lived individualists accumulate most of the wealth and are both richer and longer lived than the altruists.
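Since the model is simple to state, a discrete-time Monte Carlo sketch of both societies follows; the diffusion is approximated by Gaussian increments, and the step size, thresholds and horizon are arbitrary choices of ours, not taken from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N, w0, eps = 100, 1.0, 1e-3   # agents, initial wealth, bankruptcy threshold

    def simulate(altruist, steps=100_000, dt=1e-3):
        w = np.full(N, w0)
        alive = np.ones(N, dtype=bool)
        for t in range(steps):
            w[alive] += rng.normal(0.0, np.sqrt(dt), alive.sum())  # diffusion
            broke = alive & (w <= 0.0)
            if altruist:
                if broke.any():
                    # pool the remaining (non-negative) wealth, share equally
                    w[:] = w.clip(min=0.0).sum() / N
                if w.sum() < eps:            # collective bankruptcy
                    return t * dt, w
            else:
                alive &= ~broke              # bankrupt individualists "die"
                w[~alive] = 0.0
                if not alive.any():
                    return t * dt, w
        return steps * dt, w

    t_alt, w_alt = simulate(altruist=True)
    t_ind, w_ind = simulate(altruist=False)
    print(f"survival time: altruists {t_alt:.2f}, individualists {t_ind:.2f}")
    print("median wealth:", np.median(w_alt), np.median(w_ind))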
147 - Akihiko Noda 2021
This study examines the dynamic asset market linkages under the COVID-19 global pandemic based on market efficiency, in the sense of Fama (1970). In particular, we estimate the joint degree of market efficiency by applying Ito et al.'s (2014, 2017) Generalized Least Squares-based time-varying vector autoregression model. The empirical results show that (1) the joint degree of market efficiency changes widely over time, consistent with Lo's (2004) adaptive market hypothesis; (2) the COVID-19 pandemic may eliminate arbitrage and improve market efficiency through enhanced linkages between the asset markets; and (3) market efficiency has continued to decline due to the Bitcoin bubble that emerged at the end of 2020.
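The paper's GLS-based time-varying VAR is beyond a short sketch, but the underlying idea, that predictability of returns from their own past signals inefficiency, can be illustrated with a crude rolling AR(1) proxy. Everything below (window length, AR order, the use of the absolute slope as the inefficiency measure, a single market rather than the joint estimate) is a simplification of ours, not the authors' estimator.

    import numpy as np

    def rolling_ar1_inefficiency(returns, window=250):
        # |rolling AR(1) slope|: near zero when returns are unpredictable
        # (efficient), larger when past returns help predict today's.
        r = np.asarray(returns, dtype=float)
        out = np.full(r.size, np.nan)
        for t in range(window, r.size):
            y, x = r[t - window + 1:t + 1], r[t - window:t]
            out[t] = abs(np.polyfit(x, y, 1)[0])   # OLS slope of y on x
        return out

    # With simulated i.i.d. returns the proxy should hover near zero:
    r = np.random.default_rng(1).normal(0.0, 0.01, 1000)
    print(np.nanmean(rolling_ar1_inefficiency(r)))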
Bitcoin has attracted attention from different market participants due to unpredictable price patterns. Sometimes the price has exhibited big jumps; Bitcoin prices have also had extreme, unexpected crashes. We test the predictive power of a wide range of determinants on Bitcoin's price direction using the continuous transfer entropy approach as a feature selection criterion. Accordingly, the assets that are statistically significant under a permutation test on the nearest-neighbour estimation of local transfer entropy are used as features or explanatory variables in a deep learning classification model to predict the price direction of Bitcoin. The proposed variable selection methodology excludes the NASDAQ index and Tesla as drivers. Under different scenarios and metrics, the best results are obtained using the significant drivers during the pandemic as validation. On the test set, accuracy increased in the post-pandemic scenario of July 2020 to January 2021 even without drivers. In other words, our results indicate that in times of high volatility, Bitcoin seems to self-regulate and does not need additional drivers to improve the accuracy of its price direction forecasts.
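A minimal sketch of the feature-selection step is given below. It uses a binned (plug-in) transfer entropy estimator with a permutation test; the paper instead uses a nearest-neighbour estimator of local transfer entropy on continuous data, so treat the estimator, bin count and lag choices here as assumptions.

    import numpy as np

    def transfer_entropy(x, y, bins=4):
        # TE(x -> y) with one-step histories, via histogram estimation.
        xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
        yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
        yf, yp, xp = yd[1:], yd[:-1], xd[:-1]    # future, past, driver past
        def H(*cols):                            # joint Shannon entropy
            _, c = np.unique(np.stack(cols, 1), axis=0, return_counts=True)
            p = c / c.sum()
            return -(p * np.log(p)).sum()
        # TE = H(yf|yp) - H(yf|yp,xp), written with joint entropies:
        return H(yf, yp) - H(yp) + H(yp, xp) - H(yf, yp, xp)

    def permutation_pvalue(x, y, n=200, seed=0):
        rng = np.random.default_rng(seed)
        te = transfer_entropy(x, y)
        null = np.array([transfer_entropy(rng.permutation(x), y)
                         for _ in range(n)])
        return te, ((null >= te).sum() + 1) / (n + 1)

    # A candidate driver is kept as a feature only if significant, e.g.:
    # te, p = permutation_pvalue(driver_returns, btc_returns)  # keep if p < 0.05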
Data normalization is one of the most important preprocessing steps when building a machine learning model, especially when the model of interest is a deep neural network. This is because a deep neural network optimized with stochastic gradient descent is sensitive to the input variable range and prone to numerical issues. Unlike other types of signals, financial time-series often exhibit unique characteristics such as high volatility, non-stationarity and multi-modality that make them challenging to work with, often requiring expert domain knowledge to devise a suitable processing pipeline. In this paper, we propose a novel data-driven normalization method for deep neural networks that handle high-frequency financial time-series. The proposed normalization scheme, which takes into account the bimodal characteristic of financial multivariate time-series, requires no expert knowledge to preprocess a financial time-series, since this step is formulated as part of the end-to-end optimization process. Our experiments, conducted with state-of-the-art neural networks and high-frequency data from two large-scale limit order books from the Nordic and US markets, show significant improvements over other normalization techniques in forecasting future stock price dynamics.
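As a rough sketch of what normalization learned end-to-end can look like, the PyTorch layer below shifts and scales each input window using learned functions of its own summary statistics. It is a simplified stand-in inspired by the description above; the sublayer structure, sizes and names are our assumptions, not the paper's architecture.

    import torch
    import torch.nn as nn

    class AdaptiveNorm(nn.Module):
        # Learned shift-and-scale normalization for input of shape
        # (batch, time, features), trained jointly with the model.
        def __init__(self, n_features):
            super().__init__()
            self.shift = nn.Linear(n_features, n_features, bias=False)
            self.scale = nn.Linear(n_features, n_features, bias=False)

        def forward(self, x):
            mean = x.mean(dim=1, keepdim=True)       # per-window mean
            x = x - self.shift(mean)                 # learned centering
            std = x.std(dim=1) + 1e-8                # per-window deviation
            scale = self.scale(std).abs().clamp(min=1e-8)
            return x / scale.unsqueeze(1)            # learned rescaling

    # Prepend the layer so its parameters are fit end-to-end, e.g. (toy sizes):
    # net = nn.Sequential(AdaptiveNorm(40), nn.Flatten(), nn.Linear(40 * 50, 3))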
In recent years, cryptocurrencies have gone from an obscure niche to a prominent place, with investment in these assets becoming increasingly popular. However, cryptocurrencies carry high risk due to their high volatility. In this paper, criteria based on historical cryptocurrency data are defined in order to characterize returns and risks in different ways over short time windows (7 and 15 days); the importance of these criteria is then analyzed by various methods and their impact is evaluated. Finally, we outline a plan to use the knowledge obtained for selecting investment portfolios by applying multi-criteria methods.
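For illustration, the snippet below computes a few simple return/risk criteria over 7- and 15-day trailing windows using pandas; the specific criteria and column names are ours, since the abstract does not list the paper's exact set.

    import numpy as np
    import pandas as pd

    def window_criteria(prices, window):
        r = prices.pct_change()
        return pd.DataFrame({
            f"ret_{window}d":   r.rolling(window).mean(),   # avg daily return
            f"vol_{window}d":   r.rolling(window).std(),    # volatility
            # crude loss proxy: worst min/max price ratio in the window
            f"range_{window}d": prices.rolling(window)
                                      .apply(lambda p: p.min() / p.max() - 1.0),
        })

    prices = pd.Series(np.random.default_rng(2).lognormal(0.0, 0.05, 60).cumprod())
    crit = pd.concat([window_criteria(prices, 7),
                      window_criteria(prices, 15)], axis=1)
    print(crit.tail())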
The principal component analysis (PCA) is a staple statistical and unsupervised machine learning technique in finance. The application of PCA in a financial setting is associated with several technical difficulties, such as numerical instability and nonstationarity. We attempt to resolve them by proposing two new variants of PCA: an iterated principal component analysis (IPCA) and an exponentially weighted moving principal component analysis (EWMPCA). Both variants rely on the Ogita-Aishima iteration as a crucial step.
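A bare-bones sketch of the EWMPCA idea, as we read it, appears below: an exponentially weighted moving covariance is updated online and re-diagonalized at each step. The decay parameter is arbitrary, and the Ogita-Aishima refinement step the authors rely on (to stabilize successive eigendecompositions) is deliberately omitted here.

    import numpy as np

    def ewm_pca(returns, alpha=0.05):
        X = np.asarray(returns, dtype=float)
        n, d = X.shape
        mu, cov = X[0].copy(), np.eye(d) * 1e-6
        components = []
        for t in range(1, n):
            mu = (1 - alpha) * mu + alpha * X[t]             # EWM mean
            dev = (X[t] - mu)[:, None]
            cov = (1 - alpha) * cov + alpha * (dev @ dev.T)  # EWM covariance
            w, V = np.linalg.eigh(cov)       # eigenvalues ascending
            components.append(V[:, ::-1])    # principal axes, descending
        return components

    rets = np.random.default_rng(3).normal(0.0, 0.01, (500, 5))
    print(ewm_pca(rets)[-1][:, 0])   # leading component at the final step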
Cryptocurrencies' return cross-predictability and technological similarity yield information on risk propagation and market segmentation. To investigate these effects, we build a time-varying network for cryptocurrencies based on the evolution of return cross-predictability and technological similarities. We develop a dynamic covariate-assisted spectral clustering method to consistently estimate the latent community structure of the cryptocurrency network, accounting for both sets of information. We demonstrate that investors can achieve better risk diversification by investing in cryptocurrencies from different communities. A cross-sectional portfolio that implements an inter-crypto momentum trading strategy earns a 1.08% daily return. By dissecting the portfolio returns with respect to behavioral factors, we confirm that our results are not driven by behavioral mechanisms.
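The sketch below shows one way the covariate-assisted ingredient can enter a spectral clustering pipeline: blending the normalized predictability network with a similarity built from technological covariates before eigendecomposition and k-means. It is static, and the blend L + alpha * X X^T, the weights, and all names are illustrative assumptions rather than the authors' dynamic estimator.

    import numpy as np
    from sklearn.cluster import KMeans

    def covariate_assisted_clusters(A, X, k, alpha=0.5):
        deg = np.maximum(A.sum(axis=1), 1e-12)
        Dh = np.diag(1.0 / np.sqrt(deg))
        L = Dh @ A @ Dh                      # normalized adjacency
        S = L + alpha * X @ X.T              # blend in covariate similarity
        _, V = np.linalg.eigh(S)
        U = V[:, -k:]                        # top-k eigenvectors
        U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
        return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(U)

    rng = np.random.default_rng(4)
    A = (rng.random((30, 30)) < 0.1).astype(float)
    A = np.triu(A, 1); A = A + A.T           # symmetric, zero diagonal
    X = rng.normal(size=(30, 4))             # technological covariates (toy)
    print(covariate_assisted_clusters(A, X, k=3))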
177 - Lucien Boulet 2021
Several academics have studied the ability of hybrid models mixing univariate Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models and neural networks to deliver better volatility predictions than purely econometric models. Despite very promising results, the generalization of such models to the multivariate case has yet to be studied. Moreover, very few papers have examined the ability of neural networks to predict the covariance matrix of asset returns, and all use a rather small number of assets, thus not addressing what is known as the curse of dimensionality. The goal of this paper is to investigate the ability of hybrid models, mixing GARCH processes and neural networks, to forecast covariance matrices of asset returns. To do so, we propose a new model, based on multivariate GARCHs, that decouples the volatility and correlation predictions. The volatilities are forecast using hybrid neural networks, while the correlations follow a traditional econometric process. After implementing the models in a minimum variance portfolio framework, our results are threefold. First, adding GARCH parameters as inputs is beneficial to the proposed model. Second, using one-hot encoding to help the neural network differentiate between stocks improves performance. Third, the proposed model is very promising, as it not only outperforms the equally weighted portfolio but also outperforms, by a significant margin, its econometric counterpart that uses univariate GARCHs to predict the volatilities.
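The decoupling can be summarized as H = D R D: a diagonal matrix of per-asset volatility forecasts wrapped around a correlation matrix. The sketch below assembles such a forecast and the resulting minimum-variance weights; the volatility inputs are placeholders for the hybrid GARCH/neural-network forecasts, and the constant-correlation estimate stands in for whichever econometric correlation process the paper uses.

    import numpy as np

    def covariance_from_vols(vols, std_residuals):
        # H = D R D, with D = diag(volatility forecasts) and R estimated
        # from past standardized residuals (constant-correlation stand-in).
        R = np.corrcoef(std_residuals, rowvar=False)
        D = np.diag(vols)
        return D @ R @ D

    rng = np.random.default_rng(5)
    resid = rng.normal(size=(500, 4))            # standardized residuals (toy)
    vols = np.array([0.01, 0.02, 0.015, 0.03])   # stand-in NN vol forecasts
    H = covariance_from_vols(vols, resid)

    # Minimum-variance portfolio: w proportional to H^{-1} 1, summing to one.
    w = np.linalg.solve(H, np.ones(4))
    print(w / w.sum())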
The paper discusses multivariate self- and cross-exciting processes. We define a class of multivariate point processes via their corresponding stochastic intensity processes, which are driven by stochastic jumps. Essentially, there is a jump in an intensity process whenever the corresponding point process records an event. An attribute of our modelling class is that not only is a jump recorded at each event, but also its magnitude. This allows large jumps to influence the intensity to a larger degree than smaller jumps. We give conditions which guarantee that the process is stable, in the sense that it does not explode, and provide a detailed discussion of when the subclass of linear models is stable. Finally, we fit our model to financial time series data from the S&P 500 and Nikkei 225 indices. We conclude that a nonlinear variant from our modelling class fits the data best. This supports the observation that in times of crisis (high intensity), jumps tend to arrive in clusters, whereas there are typically longer times between jumps when markets are calmer. We moreover observe more variability in jump sizes when the intensity is high than when it is low.
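For concreteness, a linear specification from this model class can be written as follows; the notation is ours, not taken from the paper:

$$\lambda_i(t) = \mu_i + \sum_{j=1}^{d} \sum_{t_k^j < t} \alpha_{ij}\, g\big(m_k^j\big)\, e^{-\beta_{ij}(t - t_k^j)},$$

where $m_k^j$ is the magnitude of the $k$-th jump of component $j$ and $g$ is increasing, so that larger jumps excite the intensities more; a nonlinear variant of the class applies a positive link function to the right-hand side.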