
High Temporal Resolution Rainfall Runoff Modelling Using Long-Short-Term-Memory (LSTM) Networks

Posted by Wei Li
Publication date: 2020
Language: English





Accurate and efficient models for rainfall-runoff (RR) simulation are crucial for flood risk management. Most rainfall-runoff models in use today are process-driven; i.e., they solve either simplified empirical formulas or some variation of the St. Venant (shallow water) equations. With the development of machine-learning techniques, we may now be able to emulate such models using, for example, neural networks. In this study, a data-driven RR model using a sequence-to-sequence Long Short-Term Memory (LSTM) network was constructed. The model was tested for a watershed in Houston, TX, known for severe flood events. The LSTM network's capability to learn long-term dependencies between its input and output allowed modeling RR at high temporal resolution (15 minutes). Using 10 years of precipitation data from 153 rainfall gages and river channel discharge data (more than 5.3 million data points), several numerical tests were designed to assess the developed model's performance in predicting river discharge. The model results were also compared with the output of a process-driven model, the Gridded Surface Subsurface Hydrologic Analysis (GSSHA) model, and the physical consistency of the LSTM model was explored. The results showed that the LSTM model was able to predict discharge efficiently and achieve good model performance. Compared to GSSHA, the data-driven model was more efficient and robust in terms of prediction and calibration. Interestingly, the performance of the LSTM model improved (test Nash-Sutcliffe model efficiency rose from 0.666 to 0.942) when a subset of rainfall gages, selected based on model performance, was used as input instead of all gages.
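As a rough illustration of the sequence-to-sequence arrangement the abstract describes, the sketch below encodes a multi-gage precipitation history with one LSTM and decodes future discharge steps with another. It is a minimal sketch, not the paper's configuration: the hidden size, the sequence lengths, and the autoregressive decoder loop are all our assumptions.

```python
# Minimal sketch of a sequence-to-sequence LSTM rainfall-runoff model,
# assuming inputs of shape (batch, time, n_gages) at 15-minute resolution.
# Layer sizes and the single-layer decoder are illustrative guesses.
import torch
import torch.nn as nn

class Seq2SeqRR(nn.Module):
    def __init__(self, n_gages=153, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_gages, hidden, batch_first=True)
        self.decoder = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # discharge at each 15-min step

    def forward(self, rain, horizon):
        # rain: (batch, T_in, n_gages); encode the precipitation history
        _, state = self.encoder(rain)
        # decode `horizon` future steps autoregressively from a zero token
        y = torch.zeros(rain.size(0), 1, 1, device=rain.device)
        outs = []
        for _ in range(horizon):
            out, state = self.decoder(y, state)
            y = self.head(out)            # predicted discharge, fed back in
            outs.append(y)
        return torch.cat(outs, dim=1)     # (batch, horizon, 1)

model = Seq2SeqRR()
rain = torch.randn(8, 96, 153)            # 8 samples, 24 h of 15-min rainfall
discharge = model(rain, horizon=4)        # predict the next hour
```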


Read also

Objectives: Atrial fibrillation (AF) is a common heart rhythm disorder associated with deadly and debilitating consequences including heart failure, stroke, poor mental health, reduced quality of life and death. An automatic system that diagnoses various types of cardiac arrhythmias would help cardiologists initiate appropriate preventive measures and improve the analysis of cardiac disease. To this end, this paper introduces a new approach to automatically detect and classify cardiac arrhythmias in electrocardiogram (ECG) recordings. Methods: The proposed approach used a combination of Convolutional Neural Networks (CNNs) and a sequence of Long Short-Term Memory (LSTM) units, with pooling, dropout and normalization techniques to improve accuracy. The network predicted a classification at every 18th input sample, and we selected the final prediction for classification. Results were cross-validated on the PhysioNet Challenge 2017 training dataset, which contains 8,528 single-lead ECG recordings lasting from 9 s to just over 60 s. Results: Using the proposed structure and no explicit feature selection, 10-fold stratified cross-validation gave an overall F-measure of 0.83 ± 0.015 on the held-out test data (mean ± standard deviation over all folds) and 0.80 on the hidden dataset of the Challenge entry server.
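The CNN-into-LSTM arrangement described in the Methods can be sketched as follows. The filter counts, the four-class output head, and the choice to keep only the final time-step prediction are illustrative assumptions, not the authors' exact design.

```python
# Hedged sketch of a CNN feeding an LSTM for single-lead ECG classification,
# with the pooling, dropout and normalization the abstract mentions.
# Input shape (batch, 1, n_samples); all sizes are illustrative.
import torch
import torch.nn as nn

class CnnLstmECG(nn.Module):
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, padding=4),
            nn.BatchNorm1d(32),            # normalization
            nn.ReLU(),
            nn.MaxPool1d(2),               # pooling halves the time axis
            nn.Dropout(0.2),
        )
        self.lstm = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, ecg):
        f = self.features(ecg)             # (batch, 32, T/2)
        f = f.transpose(1, 2)              # LSTM expects (batch, time, feat)
        seq, _ = self.lstm(f)
        logits = self.head(seq)            # a prediction at every step
        return logits[:, -1]               # keep the final prediction

model = CnnLstmECG()
print(model(torch.randn(2, 1, 3000)).shape)   # torch.Size([2, 4])
```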
Zihao Wang, Zhifei Xu, Jiayi He (2020)
In this work we propose a neuromorphic-hardware-based signal equalizer built on a deep learning implementation. The proposed neural equalizer is plasticity-trainable, in contrast to a traditional model-designed decision feedback equalizer (DFE). A trainable Long Short-Term Memory network based DFE architecture is proposed for signal recovery, and a digital implementation is evaluated on an FPGA. In contrast to model-based equalization methods, the proposed approach can equalize signals of multiple frequencies rather than a single signal type. We show quantitatively that the neuromorphic equalizer, which is amenable to both analog and digital implementation, outperforms benchmark approaches on several metrics. The proposed method is adaptable to both general neuromorphic computing and ASIC instruments.
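A hedged sketch of the general idea, an LSTM trained to recover transmitted symbols from received, channel-distorted samples, is given below. The window length, hidden size, and hard-decision step are our assumptions; the paper's DFE structure and FPGA mapping are not reproduced here.

```python
# Illustrative sketch of an LSTM acting as a trainable equalizer, mapping a
# window of received (channel-distorted) samples to a soft estimate of the
# transmitted symbol. All dimensions are assumptions for illustration.
import torch
import torch.nn as nn

class LSTMEqualizer(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)    # soft symbol estimate

    def forward(self, rx):
        # rx: (batch, window, 1) received samples around the symbol of interest
        seq, _ = self.lstm(rx)
        return self.out(seq[:, -1])        # estimate from the last state

eq = LSTMEqualizer()
rx = torch.randn(16, 8, 1)                 # 16 windows of 8 samples each
soft = eq(rx)
bits = (soft > 0).float()                  # hard decision for binary signaling
```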
Financial trading is at the forefront of time-series analysis and has grown hand-in-hand with it. The advent of electronic trading has allowed complex machine learning solutions to enter the field of financial trading. Financial markets carry both long-term and short-term signals, so a good predictive model for financial trading should be able to incorporate both. One of the most sought-after forms of electronic trading is high-frequency trading (HFT), typically known for microsecond-sensitive changes, which results in a tremendous amount of data. LSTMs are one of the most capable variants of the RNN family for handling long-term dependencies, but even they are not equipped to handle sequences on the order of thousands of data points, as in HFT. We propose very-long short term memory networks, or VLSTMs, to deal with such extreme-length sequences, and we explore their importance in the context of HFT. We compare our model on a publicly available dataset and obtain a 3.14% increase in F1-score over existing state-of-the-art time-series forecasting models. We also show that our model has great parallelization potential, which is essential in practice when trading on such markets.
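The abstract does not specify the internals of a VLSTM, so the sketch below only illustrates the underlying difficulty it addresses: pushing an LSTM through a sequence thousands of steps long by carrying state across fixed-size chunks and truncating gradients at chunk boundaries. Every name and size here is our own illustration, not the proposed architecture.

```python
# Generic long-sequence handling with a plain LSTM (not the paper's VLSTM):
# process a very long series in chunks, letting the hidden state flow across
# chunks while detaching it so gradients stay within each chunk.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=4, hidden_size=32, batch_first=True)
head = nn.Linear(32, 1)

x = torch.randn(1, 10_000, 4)              # one very long HFT-like series
state = None
for chunk in x.split(500, dim=1):          # 500-step chunks
    out, state = lstm(chunk, state)        # state flows across chunks
    state = tuple(s.detach() for s in state)  # truncate the gradient here
pred = head(out[:, -1])                    # forecast from the final state
```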
Road surface friction significantly impacts traffic safety and mobility. A precise road surface friction prediction model can help alleviate the influence of inclement road conditions on traffic safety, level of service, traffic mobility, fuel efficiency, and sustained economic productivity. Most related previous studies used laboratory-based methods that are difficult to implement in practice, and other data-driven methods have not considered the time-series character of road surface conditions. This study employed a Long Short-Term Memory (LSTM) neural network to develop a data-driven road surface friction prediction model based on historical data. The proposed model outperformed the baseline models, achieving the lowest values of the predictive performance measures. The influence of the number of time-lags and of the prediction time interval on predictive accuracy was analyzed, as was the effect of adding road surface water thickness, road surface temperature, and air temperature as inputs. The findings can support road maintenance strategy development and decision making, thus mitigating the impact of inclement road conditions on traffic mobility and safety. Future work includes a modified LSTM-based prediction model that accommodates flexible time intervals between time-lags.
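A minimal sketch of the time-lag framing described above: predict friction one interval ahead from the previous readings, with the extra covariates (surface water thickness, surface temperature, air temperature) stacked as input features. All dimensions are illustrative assumptions.

```python
# Sketch of an LSTM friction predictor over a fixed number of time-lags.
# Each time step carries 4 features: friction plus 3 assumed covariates.
import torch
import torch.nn as nn

class FrictionLSTM(nn.Module):
    def __init__(self, n_features=4, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, lags, n_features) historical readings
        seq, _ = self.lstm(x)
        return self.out(seq[:, -1])        # friction at the next interval

model = FrictionLSTM()
history = torch.randn(32, 12, 4)           # 12 time-lags, 4 features each
next_friction = model(history)             # (32, 1)
```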
We propose a method using a long short-term memory (LSTM) network to estimate the noise power spectral density (PSD) of single-channel audio signals represented in the short-time Fourier transform (STFT) domain. An LSTM network common to all frequency bands is trained; it processes each frequency band individually by mapping the noisy STFT magnitude sequence to its corresponding noise PSD sequence. Unlike deep-learning-based speech enhancement methods that learn the full-band spectral structure of speech segments, the proposed method exploits the sub-band STFT magnitude evolution of noise with a long time dependency, in the spirit of the unsupervised noise estimators described in the literature. Speaker- and speech-independent experiments with different types of noise show that the proposed method outperforms the unsupervised estimators and generalizes well to noise types that are not present in the training set.
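The shared sub-band idea can be sketched as follows: a single LSTM is applied to every STFT frequency band independently, mapping that band's noisy magnitude sequence to its noise PSD sequence. Folding the bands into the batch dimension and all layer sizes are our implementation assumptions.

```python
# Sketch of a band-shared LSTM noise PSD estimator: one LSTM processes each
# frequency band as an independent sequence of STFT magnitudes.
import torch
import torch.nn as nn

class SubBandNoisePSD(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)  # common to all bands
        self.out = nn.Linear(hidden, 1)

    def forward(self, mag):
        # mag: (batch, n_bands, n_frames) noisy STFT magnitudes
        b, k, t = mag.shape
        x = mag.reshape(b * k, t, 1)        # treat each band as one sequence
        seq, _ = self.lstm(x)
        return self.out(seq).reshape(b, k, t)  # noise PSD per band and frame

model = SubBandNoisePSD()
mag = torch.rand(2, 257, 100)               # 257 bands, 100 STFT frames
noise_psd = model(mag)                       # same shape as the input
```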
