VLSTM: Very Long Short-Term Memory Networks for High-Frequency Trading

Published by: Prakhar Ganesh
Publication date: 2018
Research language: English





Financial trading is at the forefront of time-series analysis and has grown hand-in-hand with it. The advent of electronic trading has allowed complex machine learning solutions to enter the field of financial trading. Financial markets carry both long-term and short-term signals, so a good predictive model for financial trading should be able to incorporate both. One of the most sought-after forms of electronic trading is high-frequency trading (HFT), typically characterized by microsecond-sensitive changes, which produces a tremendous amount of data. LSTMs are among the most capable variants of the RNN family at handling long-term dependencies, but even they are not equipped to handle sequences on the order of thousands of data points, as arise in HFT. We propose very long short-term memory networks, or VLSTMs, to deal with such extreme-length sequences. We explore the importance of VLSTMs in the context of HFT. We evaluate our model on a publicly available dataset and obtain a 3.14% increase in F1-score over existing state-of-the-art time-series forecasting models. We also show that our model has great parallelization potential, which is essential in practice when trading in such markets.
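The abstract does not spell out the VLSTM architecture, so the sketch below only illustrates one common way to make an LSTM tractable on sequences of thousands of ticks: split the input into chunks, encode each chunk with an LSTM (these encoders can run in parallel, one plausible source of the parallelization the abstract mentions), and run a second LSTM over the chunk summaries. All layer sizes, the chunk length, and the three-class output are assumptions for illustration, not details from the paper.

```python
# Illustrative sketch only: the VLSTM architecture is not specified in this
# abstract. A generic hierarchical LSTM that handles sequences of thousands of
# steps by splitting them into chunks; the chunk encoders are independent and
# can therefore run in parallel.
import torch
import torch.nn as nn

class HierarchicalLSTMClassifier(nn.Module):
    def __init__(self, n_features, hidden=64, chunk_len=100, n_classes=3):
        super().__init__()
        self.chunk_len = chunk_len
        self.chunk_lstm = nn.LSTM(n_features, hidden, batch_first=True)   # encodes each chunk
        self.global_lstm = nn.LSTM(hidden, hidden, batch_first=True)      # runs over chunk summaries
        self.head = nn.Linear(hidden, n_classes)                          # e.g. price up / flat / down

    def forward(self, x):                        # x: (batch, seq_len, n_features)
        b, t, f = x.shape
        n_chunks = t // self.chunk_len
        x = x[:, :n_chunks * self.chunk_len].reshape(b * n_chunks, self.chunk_len, f)
        _, (h, _) = self.chunk_lstm(x)           # chunk summaries, computable in parallel
        summaries = h[-1].reshape(b, n_chunks, -1)
        _, (h, _) = self.global_lstm(summaries)  # long-range dependencies across chunks
        return self.head(h[-1])

# Example: a batch of 8 sequences, 2000 ticks each, 40 order-book features
model = HierarchicalLSTMClassifier(n_features=40)
logits = model(torch.randn(8, 2000, 40))         # -> (8, 3)
```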




Read also

We demonstrate how machine learning is able to model experiments in quantum physics. Quantum entanglement is a cornerstone for upcoming quantum technologies such as quantum computation and quantum cryptography. Of particular interest are complex quantum states with more than two particles and a large number of entangled quantum levels. Given such a multiparticle high-dimensional quantum state, it is usually impossible to reconstruct an experimental setup that produces it. To search for interesting experiments, one thus has to randomly create millions of setups on a computer and calculate the respective output states. In this work, we show that machine learning models can provide significant improvement over random search. We demonstrate that a long short-term memory (LSTM) neural network can successfully learn to model quantum experiments by correctly predicting output state characteristics for given setups without the necessity of computing the states themselves. This approach not only allows for faster search but is also an essential step towards automated design of multiparticle high-dimensional quantum experiments using generative machine learning models.
Accurate and efficient models for rainfall-runoff (RR) simulations are crucial for flood risk management. Most rainfall models in use today are process-driven; i.e., they solve either simplified empirical formulas or some variation of the St. Venant (shallow water) equations. With the development of machine-learning techniques, we may now be able to emulate rainfall models using, for example, neural networks. In this study, a data-driven RR model using a sequence-to-sequence Long Short-Term Memory (LSTM) network was constructed. The model was tested for a watershed in Houston, TX, known for severe flood events. The LSTM network's capability in learning long-term dependencies between the input and output of the network allowed modeling RR with high resolution in time (15 minutes). Using 10 years of precipitation from 153 rainfall gages and river channel discharge data (more than 5.3 million data points), and by designing several numerical tests, the developed model's performance in predicting river discharge was evaluated. The model results were also compared with the output of a process-driven model, the Gridded Surface Subsurface Hydrologic Analysis (GSSHA). Moreover, the physical consistency of the LSTM model was explored. The results showed that the LSTM model was able to efficiently predict discharge and achieve good performance. When compared to GSSHA, the data-driven model was more efficient and robust in terms of prediction and calibration. Interestingly, the performance of the LSTM model improved (test Nash-Sutcliffe model efficiency from 0.666 to 0.942) when a selected subset of rainfall gages, chosen based on model performance, was used as input instead of all rainfall gages.
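The exact sequence-to-sequence configuration is not given in this abstract; the sketch below shows a generic LSTM encoder-decoder that maps a window of 15-minute rainfall readings from many gages to a window of future river discharge values. The 153 gages come from the abstract, while the window lengths, hidden size, and autoregressive decoding scheme are assumptions for illustration.

```python
# Illustrative sketch only: a generic sequence-to-sequence LSTM for
# rainfall -> discharge, not the paper's exact model.
import torch
import torch.nn as nn

class RainfallRunoffSeq2Seq(nn.Module):
    def __init__(self, n_gages=153, hidden=128, horizon=96):
        super().__init__()
        self.horizon = horizon                           # e.g. 96 steps = 24 h at 15-minute resolution
        self.encoder = nn.LSTM(n_gages, hidden, batch_first=True)
        self.decoder = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)                  # discharge at each step

    def forward(self, rain, last_discharge):
        # rain: (batch, in_steps, n_gages); last_discharge: (batch, 1)
        _, state = self.encoder(rain)                    # summarize the rainfall history
        preds, inp = [], last_discharge.unsqueeze(1)     # (batch, 1, 1)
        for _ in range(self.horizon):                    # autoregressive decoding
            out, state = self.decoder(inp, state)
            inp = self.out(out)                          # next discharge estimate
            preds.append(inp)
        return torch.cat(preds, dim=1).squeeze(-1)       # (batch, horizon)

# Example: 4 samples, 48 h of rainfall history (192 steps) over 153 gages
model = RainfallRunoffSeq2Seq()
q_hat = model(torch.randn(4, 192, 153), torch.randn(4, 1))   # -> (4, 96)
```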
Detecting and intercepting malicious requests is one of the most widely used defenses against attacks in network security. Most existing detection approaches, including blacklist character matching and machine learning algorithms, have been shown to be vulnerable to sophisticated attacks. To address these issues, a more general and rigorous detection method is required. In this paper, we formulate the problem of detecting malicious requests as a temporal sequence classification problem and propose a novel deep learning model, namely the Convolutional Neural Network-Bidirectional Long Short-Term Memory-Convolutional Neural Network (CNN-BiLSTM-CNN). By connecting the shallow and deep feature maps of the convolutional layers, the model's ability to extract malicious features at a fine level of detail is improved. Experimental results on the HTTP dataset CSIC 2010 demonstrate the effectiveness of the proposed method compared with the state of the art.
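The abstract names only the layer ordering, so the sketch below shows a generic CNN-BiLSTM-CNN stack (character embedding, 1-D convolution, bidirectional LSTM, 1-D convolution, classifier) for labeling an HTTP request, represented as a character sequence, as benign or malicious. The vocabulary size, channel counts, and kernel sizes are assumptions, and the connection between shallow and deep feature maps described in the abstract is omitted for brevity.

```python
# Illustrative sketch only: a generic CNN-BiLSTM-CNN sequence classifier,
# not the paper's exact configuration.
import torch
import torch.nn as nn

class CNNBiLSTMCNN(nn.Module):
    def __init__(self, vocab=128, emb=32, channels=64, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv1 = nn.Conv1d(emb, channels, kernel_size=3, padding=1)         # shallow features
        self.bilstm = nn.LSTM(channels, hidden, batch_first=True, bidirectional=True)
        self.conv2 = nn.Conv1d(2 * hidden, channels, kernel_size=3, padding=1)  # deep features
        self.head = nn.Linear(channels, n_classes)

    def forward(self, x):                           # x: (batch, seq_len) of character ids
        h = self.embed(x).transpose(1, 2)           # (batch, emb, seq_len)
        h = torch.relu(self.conv1(h)).transpose(1, 2)
        h, _ = self.bilstm(h)                       # (batch, seq_len, 2*hidden)
        h = torch.relu(self.conv2(h.transpose(1, 2)))
        h = h.max(dim=2).values                     # global max pool over the sequence
        return self.head(h)

# Example: 8 requests, each truncated/padded to 300 characters
model = CNNBiLSTMCNN()
logits = model(torch.randint(0, 128, (8, 300)))     # -> (8, 2)
```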
Thomas E. Portegys, 2021
This study compares the modularity performance of two artificial neural network architectures: a Long Short-Term Memory (LSTM) recurrent network, and Morphognosis, a neural network based on a hierarchy of spatial and temporal contexts. Mazes are used to measure performance, defined as the ability to utilize independently learned mazes to solve mazes composed of them. A maze is a sequence of rooms connected by doors. The modular task is implemented as follows: at the beginning of the maze, an initial door choice forms a context that must be retained until the end of an intervening maze, where the same door must be chosen again to reach the goal. For testing, the door-association mazes and separately trained intervening mazes are presented together for the first time. While both neural networks perform well during training, the testing performance of Morphognosis is significantly better than LSTM on this modular task.
Recent breakthroughs in recurrent deep neural networks with long short-term memory (LSTM) units have led to major advances in artificial intelligence. State-of-the-art LSTM models with significantly increased complexity and a large number of parameters, however, have a bottleneck in computing power resulting from limited memory capacity and data communication bandwidth. Here we demonstrate experimentally that LSTM can be implemented with a memristor crossbar, which has a small circuit footprint to store a large number of parameters and in-memory computing capability that circumvents the von Neumann bottleneck. We illustrate the capability of our system by solving real-world problems in regression and classification, which shows that memristor LSTM is a promising low-power and low-latency hardware platform for edge inference.
