
Image Processing Tools for Financial Time Series Classification

Published by: Bairui Du
Publication date: 2020
Research field: Finance
Paper language: English





The application of deep learning to time series forecasting is one of the major challenges in present-day machine learning. We propose a novel methodology that combines machine learning and image processing methods to define and predict market states from intraday financial data. A wavelet transform is applied to the log-returns of stock prices for both image extraction and denoising. A convolutional neural network then extracts patterns from the denoised wavelet images to classify the daily time series, i.e. a market state is associated with the binary prediction of the daily close-price movement based on the wavelet image constructed from the price changes in the first hours of the day. This method overcomes the low signal-to-noise ratio problem in financial time series and achieves competitive prediction accuracy for the Up and Down market states when tested on the S&P 500.
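
A minimal sketch of the pipeline described in the abstract, assuming PyWavelets and PyTorch are available: a discrete wavelet decomposition is soft-thresholded to denoise the intraday log-returns, a continuous wavelet transform turns the denoised series into a 2-D image, and a small CNN maps that image to an Up/Down label. The function names, the `db4`/`morl` wavelet choices and the network layout are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
import pywt
import torch
import torch.nn as nn

def denoise(log_returns, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of a discrete wavelet decomposition."""
    coeffs = pywt.wavedec(log_returns, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745           # noise-level estimate
    thr = sigma * np.sqrt(2 * np.log(len(log_returns)))      # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(log_returns)]

def make_wavelet_image(log_returns, scales=np.arange(1, 65)):
    """Continuous wavelet transform of the denoised series -> 2-D 'image'."""
    coefs, _ = pywt.cwt(denoise(log_returns), scales, "morl")
    return np.abs(coefs).astype(np.float32)                  # (n_scales, n_times)

class MarketStateCNN(nn.Module):
    """Small CNN mapping a wavelet image to the binary Up/Down market state."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, 2)                  # Up / Down logits

    def forward(self, x):                                     # x: (B, 1, scales, times)
        return self.head(self.features(x).flatten(1))

# Usage: first hours of intraday log-returns -> wavelet image -> predicted state.
prices = np.random.lognormal(0, 0.01, 120)                   # placeholder price path
returns = np.diff(np.log(prices))
img = torch.tensor(make_wavelet_image(returns))[None, None]
logits = MarketStateCNN()(img)
```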



Read also

Data augmentation methods in combination with deep neural networks have been used extensively in computer vision on classification tasks, achieving great success; however, their use in time series classification is still at an early stage. This is even more so in the field of financial prediction, where data tends to be small, noisy and non-stationary. In this paper we evaluate several augmentation methods applied to stock datasets using two state-of-the-art deep learning models. The results show that several augmentation methods significantly improve financial performance when used in combination with a trading strategy. For a relatively small dataset (≈30K samples), augmentation methods achieve up to 400% improvement in risk-adjusted return performance; for a larger stock dataset (≈300K samples), results show up to 40% improvement.
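
As a rough illustration of the kind of transformations such studies rely on, the sketch below shows two common time-series augmentations, jittering and magnitude scaling. These are generic examples chosen here for illustration, not necessarily the specific methods evaluated in the paper.

```python
import numpy as np

def jitter(x, sigma=0.01, rng=None):
    """Add small Gaussian noise to every observation of each series."""
    rng = rng or np.random.default_rng()
    return x + rng.normal(0.0, sigma, size=x.shape)

def magnitude_scale(x, sigma=0.1, rng=None):
    """Multiply the whole batch of series by a random factor drawn around 1."""
    rng = rng or np.random.default_rng()
    return x * rng.normal(1.0, sigma)

# Usage: expand a small training set of return series before fitting a model.
returns = np.random.default_rng(0).normal(0, 0.01, size=(32, 390))   # toy batch
augmented = np.concatenate([returns, jitter(returns), magnitude_scale(returns)])
```
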
Deep Learning (DL) models can be used to tackle time series analysis tasks with great success. However, the performance of DL models can degenerate rapidly if the data are not appropriately normalized. This issue is even more apparent when DL is used for financial time series forecasting tasks, where the non-stationary and multimodal nature of the data pose significant challenges and severely affect the performance of DL models. In this work, a simple, yet effective, neural layer, that is capable of adaptively normalizing the input time series, while taking into account the distribution of the data, is proposed. The proposed layer is trained in an end-to-end fashion using back-propagation and leads to significant performance improvements compared to other evaluated normalization schemes. The proposed method differs from traditional normalization methods since it learns how to perform normalization for a given task instead of using a fixed normalization scheme. At the same time, it can be directly applied to any new time series without requiring re-training. The effectiveness of the proposed method is demonstrated using a large-scale limit order book dataset, as well as a load forecasting dataset.
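
A hedged sketch of the idea, assuming PyTorch: instead of fixed z-score constants, learnable linear maps turn each input series' own mean and deviation into its shift and scale, so the normalization is trained end-to-end with the rest of the model. The class name `AdaptiveNorm` and the choice of per-series mean and standard deviation as the summary statistics are illustrative assumptions rather than the paper's exact layer.

```python
import torch
import torch.nn as nn

class AdaptiveNorm(nn.Module):
    """Learns how to turn each series' own statistics into its shift and scale."""
    def __init__(self, n_features):
        super().__init__()
        # Learnable transforms of per-series statistics, trained by back-propagation.
        self.shift = nn.Linear(n_features, n_features, bias=False)
        self.scale = nn.Linear(n_features, n_features, bias=False)

    def forward(self, x):                        # x: (batch, time, features)
        mu = x.mean(dim=1, keepdim=True)         # per-series mean
        x = x - self.shift(mu)                   # adaptive centering
        sd = x.std(dim=1, keepdim=True)          # per-series deviation
        return x / self.scale(sd).abs().clamp(min=1e-8)

# Usage: place the layer in front of any downstream sequence model.
x = torch.randn(8, 100, 5) * 50 + 300            # unnormalized toy series
y = AdaptiveNorm(5)(x)
```
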
Financial time series have been found to follow fat-tailed distributions. Further, an empirical probability distribution sometimes shows cut-off shapes on its tails. To describe this stylized fact, we incorporate the cut-off effect in superstatistics. We then confirm that the presented stochastic model is capable of describing the statistical properties of real financial time series. In addition, we present an option pricing formula with respect to superstatistics.
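
A toy illustration of the superstatistics idea with a cut-off, in NumPy: returns are Gaussian conditional on a fluctuating variance, and truncating the variance distribution bounds the tails. The gamma variance distribution and the parameter values are arbitrary illustrative choices; the paper's specific stochastic model and option pricing formula are not reproduced here.

```python
import numpy as np

def superstat_returns(n, shape=2.0, scale=1e-4, cutoff=5e-4, rng=None):
    """Sample returns that are Gaussian conditional on a fluctuating, cut-off variance."""
    rng = rng or np.random.default_rng()
    var = rng.gamma(shape, scale, size=n)         # fluctuating variance (superstatistics)
    var = np.minimum(var, cutoff)                 # cut-off on the variance tail
    return rng.normal(0.0, np.sqrt(var))          # conditionally Gaussian returns

# Usage: the sample is fat-tailed relative to a single Gaussian, but bounded by the cut-off.
r = superstat_returns(100_000)
print(r.std(), np.abs(r).max())
```
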
A well-interpretable measure of information has recently been proposed, based on a partition obtained by intersecting a random sequence with its moving average. The partition yields disjoint sets of the sequence, which are then ranked according to their size to form a probability distribution function and finally fed into the expression of the Shannon entropy. In this work, such an entropy measure is implemented on the time series of prices and volatilities of six financial markets. The analysis has been performed on tick-by-tick data sampled every minute over six years, from 1999 to 2004, for a broad range of moving-average windows and volatility horizons. The study shows that the entropy of the volatility series depends on the individual market, while the entropy of the price series is practically a market invariant for the six markets. Finally, a cumulative information measure, the 'Market Heterogeneity Index', is derived from the integral of the proposed entropy measure. The values of the Market Heterogeneity Index are discussed as possible tools for optimal portfolio construction and compared with those obtained using the Sharpe ratio, a traditional risk-diversity measure.
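
A minimal sketch of this entropy measure in NumPy, under the assumption that the partition consists of the segments between successive crossings of the series and its moving average; the segment sizes are normalized into a probability distribution and fed to the Shannon entropy. The function name and window choice are illustrative.

```python
import numpy as np

def moving_average_entropy(series, window=20):
    """Shannon entropy of the segment-size distribution induced by moving-average crossings."""
    s = np.asarray(series, dtype=float)
    ma = np.convolve(s, np.ones(window) / window, mode="valid")
    x = s[window - 1:]                        # align the series with its moving average
    sign = np.sign(x - ma)
    crossings = np.where(np.diff(sign) != 0)[0] + 1   # crossings delimit the partition
    bounds = np.concatenate(([0], crossings, [len(x)]))
    lengths = np.diff(bounds)                 # size of each disjoint set
    p = lengths / lengths.sum()               # sizes -> probability distribution
    return -np.sum(p * np.log(p))             # Shannon entropy

# Usage: compare the entropy of price (or volatility) series across markets and windows.
prices = np.cumsum(np.random.default_rng(1).normal(0, 1, 10_000)) + 100
print(moving_average_entropy(prices, window=60))
```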