
ML-based Flood Forecasting: Advances in Scale, Accuracy and Reach

 Added by Grey Nearing
 Publication date 2020
Language: English





Floods are among the most common and deadly natural disasters in the world, and flood warning systems have been shown to be effective in reducing harm. Yet the majority of the world's vulnerable population does not have access to reliable and actionable warning systems, due to core challenges in scalability, computational costs, and data availability. In this paper we present two components of flood forecasting systems which were developed over the past year, providing access to these critical systems to 75 million people who didn't have this access before.

Related research

Effective riverine flood forecasting at scale is hindered by a multitude of factors, most notably the need to rely on human calibration in current methodology, the limited amount of data for a specific location, and the computational difficulty of building continental- or global-scale models that are sufficiently accurate. Machine learning (ML) is primed to be useful in this scenario: learned models often surpass human experts in complex high-dimensional scenarios, and the framework of transfer or multitask learning is an appealing solution for leveraging local signals to achieve improved global performance. We propose to build on these strengths and develop ML systems for timely and accurate riverine flood prediction.
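The transfer/multitask framing above can be pictured with a deliberately tiny sketch: pool data across basins to learn a shared rainfall-to-stage relationship, then adapt a cheap per-basin offset from each basin's limited local record. Everything here is synthetic and illustrative (the linear model, the basin biases, names like `offsets`); it is not the paper's system, only the principle of combining a globally trained component with local adaptation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "basins": river stage depends on rainfall, with a
# basin-specific bias and only 50 local observations per basin.
basins = {}
for b, bias in enumerate([0.5, -0.3, 1.1]):
    rain = rng.uniform(0, 10, size=50)                   # feature: recent rainfall
    stage = 2.0 * rain + bias + rng.normal(0, 0.1, 50)   # target: river stage
    basins[b] = (rain, stage)

# 1) Global model: fit one line on data pooled across all basins,
#    leveraging scale to estimate the shared rainfall response.
X = np.concatenate([r for r, _ in basins.values()])
y = np.concatenate([s for _, s in basins.values()])
slope, intercept = np.polyfit(X, y, 1)

# 2) Local adaptation: one offset per basin from its own residuals.
offsets = {b: float(np.mean(s - slope * r)) for b, (r, s) in basins.items()}

def predict(basin, rain):
    """Shared slope plus the basin's locally fitted offset."""
    return slope * rain + offsets[basin]
```

A real system would replace the linear stage with a learned hydrologic model, but the split between a globally trained component and inexpensive local calibration is the same idea.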
In this work, we track the lineage of tuples throughout their database lifetime. That is, we consider a scenario in which tuples (records) that are produced by a query may affect other tuple insertions into the DB, as part of a normal workflow. As time goes on, exact provenance explanations for such tuples become deeply nested, increasingly consuming space, and resulting in decreased clarity and readability. We present a novel approach for approximating lineage tracking using a Machine Learning (ML) and Natural Language Processing (NLP) technique, namely word embedding. The basic idea is to summarize (and approximate) the lineage of each tuple via a small set of constant-size vectors (the number of vectors per tuple is a hyperparameter). Therefore, our solution does not suffer from space complexity blow-up over time, and it naturally ranks explanations for the existence of a tuple. We devise an alternative and improved lineage tracking mechanism, that of keeping track of and querying lineage at the column level; thereby, we manage to better distinguish between the provenance features and the textual characteristics of a tuple. We integrate our lineage computations into the PostgreSQL system via an extension (ProvSQL) and experimentally exhibit useful results in terms of accuracy against exact, semiring-based justifications. In the experiments, we focus on tuples with multiple generations of tuples in their lifelong lineage and analyze them in terms of direct and distant lineage. The experiments suggest a high usefulness potential for the proposed approximate lineage methods and the further suggested enhancements. This especially holds for the column-based vectors method, which exhibits high precision and high per-level recall.
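One way to picture the constant-size-vector idea is the following sketch (not the paper's algorithm, and with toy 4-dimensional "embeddings" in place of learned word vectors): each tuple carries at most `K` vectors; combining parent lineages merges the two most similar vectors whenever the budget is exceeded, so space never grows over time, and candidate source tuples can be ranked by cosine similarity to the surviving vectors.

```python
import numpy as np

K = 2    # vectors kept per tuple (a hyperparameter, as in the abstract)

def merge_closest(vectors):
    """Average the two most similar vectors to stay within the budget."""
    best = None
    for i in range(len(vectors)):
        for j in range(i + 1, len(vectors)):
            sim = float(vectors[i] @ vectors[j])
            if best is None or sim > best[0]:
                best = (sim, i, j)
    _, i, j = best
    merged = (vectors[i] + vectors[j]) / 2
    rest = [v for t, v in enumerate(vectors) if t not in (i, j)]
    return rest + [merged]

def combine_lineage(parent_sets):
    """A new tuple's lineage: union of its parents' vectors, capped at K."""
    vectors = [v for s in parent_sets for v in s]
    while len(vectors) > K:
        vectors = merge_closest(vectors)
    return vectors

def rank_explanations(lineage, candidates):
    """Rank candidate source tuples by max cosine similarity to the lineage."""
    def score(vec):
        return max(float(vec @ u) / (np.linalg.norm(vec) * np.linalg.norm(u))
                   for u in lineage)
    return sorted(candidates, key=lambda c: score(candidates[c]), reverse=True)
```

The merge step is where the approximation happens: distinct ancestors with similar embeddings collapse into one averaged vector, trading exactness for the constant space bound the abstract describes.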
Accurately forecasting Arctic sea ice from subseasonal to seasonal scales has been a major scientific effort with fundamental challenges at play. In addition to physics-based earth system models, researchers have been applying multiple statistical and machine learning models for sea ice forecasting. Looking at the potential of data-driven sea ice forecasting, we propose an attention-based Long Short Term Memory (LSTM) ensemble method to predict monthly sea ice extent up to 1 month ahead. Using daily and monthly satellite-retrieved sea ice data from NSIDC and atmospheric and oceanic variables from the ERA5 reanalysis product for 39 years, we show that our multi-temporal ensemble method outperforms several baseline and recently proposed deep learning models. This will substantially improve our ability to predict future Arctic sea ice changes, which is fundamental for forecasting transport routes, resource development, coastal erosion, and threats to Arctic coastal communities and wildlife.
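The weighting principle behind attention-style ensembling can be sketched in a few lines (synthetic numbers; this is not the paper's network, only the idea of letting recently skillful members dominate the combination): each ensemble member's forecast is weighted by a softmax over its recent error, so the combined forecast leans toward the better members.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_ensemble(member_forecasts, recent_errors, temperature=1.0):
    """Combine member forecasts with weights that decay with recent error."""
    weights = softmax(-np.asarray(recent_errors, dtype=float) / temperature)
    return float(np.dot(weights, member_forecasts)), weights

# Three hypothetical members forecasting sea ice extent (million km^2),
# with each member's mean absolute error over recent months.
forecasts = np.array([10.2, 10.8, 9.9])
errors = np.array([0.3, 0.9, 0.4])
combined, w = attention_ensemble(forecasts, errors)
```

In the paper the weights are learned by an attention mechanism rather than set from a fixed error statistic, but the effect is the same: a convex combination of member forecasts that adapts to skill.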
Accurate short-range weather forecasting has significant implications for various sectors. Machine learning based approaches, e.g., deep learning, have gained popularity in this domain, where the existing numerical weather prediction (NWP) models still have modest skill after a few days. Here we use a ConvLSTM network to develop a deep learning model for precipitation forecasting. The crux of the idea is to develop a forecasting model which involves convolution-based feature selection and uses long-term memory in the meteorological fields in conjunction with a gradient-based learning algorithm. Prior to using the input data, we explore various techniques to overcome dataset difficulties. We follow a strategic approach to deal with missing values and discuss the model's fidelity in capturing realistic precipitation. The model resolution used is 25 km. A comparison between 5 years of predicted data and the corresponding observational records for a 2-day lead-time forecast shows correlation coefficients of 0.67 and 0.42 for lead days 1 and 2, respectively. The patterns indicate higher correlation over the Western Ghats and the Monsoon trough region (0.8 and 0.6 for lead days 1 and 2, respectively). Further, the model performance is evaluated based on skill scores, Mean Square Error, correlation coefficient, and ROC curves. This study demonstrates that the adopted deep learning approach, based only on a single precipitation variable, has reasonable skill in the short range. Incorporating multivariable-based deep learning has the potential to match or even surpass short-range precipitation forecasts based on state-of-the-art NWP models.
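The headline evaluation metric above, the correlation coefficient between forecast and observed precipitation at a given lead time, is simple to reproduce. A minimal sketch (the gridded fields here are synthetic stand-ins, not the study's data):

```python
import numpy as np

def pearson_r(forecast, observed):
    """Pearson correlation between flattened forecast/observation fields."""
    f = np.ravel(forecast).astype(float)
    o = np.ravel(observed).astype(float)
    f -= f.mean()
    o -= o.mean()
    return float((f @ o) / np.sqrt((f @ f) * (o @ o)))

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 5.0, size=(10, 10))        # "observed" rainfall field
lead1 = obs + rng.normal(0, 3.0, obs.shape)     # smaller error at lead day 1
lead2 = obs + rng.normal(0, 12.0, obs.shape)    # larger error at lead day 2
```

With growing forecast-error variance at longer leads, `pearson_r` decays in just the way the reported lead-day-1 vs lead-day-2 scores (0.67 vs 0.42) illustrate.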
S. Weyers 2018
Improvements of the systematic uncertainty, frequency instability, and long-term reliability of the two caesium fountain primary frequency standards CSF1 and CSF2 at PTB (Physikalisch-Technische Bundesanstalt) are described. We have further investigated many of the systematic effects and made a number of modifications of the fountains. With an optically stabilized microwave oscillator, the quantum projection noise limited frequency instabilities are improved to $7.2 \times 10^{-14}\,(\tau/1\,\mathrm{s})^{-1/2}$ for CSF1 and $2.5 \times 10^{-14}\,(\tau/1\,\mathrm{s})^{-1/2}$ for CSF2 at high atom density. The systematic uncertainties of CSF1 and CSF2 are reduced to $2.74 \times 10^{-16}$ and $1.71 \times 10^{-16}$, respectively. Both fountain clocks regularly calibrate the scale unit of International Atomic Time (TAI) and the local realization of Coordinated Universal Time, UTC(PTB), and serve as references to measure the frequencies of local and remote optical frequency standards.
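The quoted instabilities follow the white-frequency-noise scaling $\sigma_y(\tau) = \sigma_1\,(\tau/1\,\mathrm{s})^{-1/2}$, so the abstract's 1 s values directly imply the instability at longer averaging times. A quick check (constants taken from the abstract; the function name is just illustrative):

```python
def instability(sigma_1, tau_seconds):
    """Fractional frequency instability after averaging for tau seconds,
    assuming white-frequency-noise scaling sigma_1 * (tau / 1 s)**-0.5."""
    return sigma_1 * (tau_seconds / 1.0) ** -0.5

CSF1 = 7.2e-14   # instability at tau = 1 s
CSF2 = 2.5e-14

# After one day of averaging (86400 s), CSF2 reaches the low 1e-17 range,
# comfortably below its 1.71e-16 systematic uncertainty.
day_csf2 = instability(CSF2, 86400)
```

This is why the systematic uncertainty, not the statistical instability, limits such fountains after about a day of averaging.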
