
Comparison of Machine Learning Methods for Predicting Karst Spring Discharge in North China

Published by: Shu Cheng
Publication date: 2020
Research language: English





Quantitative analyses of karst spring discharge typically rely on physics-based models, which are inherently uncertain. To improve the understanding of the mechanism of spring discharge fluctuation and the relationship between precipitation and spring discharge, three machine learning methods were developed to reduce the predictive errors of physics-based groundwater models, simulate the discharge of the Longzici Spring karst area, and predict changes in the spring on the basis of long time-series precipitation monitoring and spring water flow data from 1987 to 2018. The three machine learning methods comprised two artificial neural networks (ANNs), namely, a multilayer perceptron (MLP) and a long short-term memory recurrent neural network (LSTM-RNN), and support vector regression (SVR). A normalization method was introduced for data preprocessing to make the three methods robust and computationally efficient. To compare and evaluate the capability of the three machine learning methods, the mean squared error (MSE), mean absolute error (MAE), and root-mean-square error (RMSE) were selected as the performance metrics. Simulations showed that MLP reduced MSE, MAE, and RMSE to 0.0010, 0.0254, and 0.0318, respectively. Meanwhile, LSTM-RNN reduced MSE to 0.0010, MAE to 0.0272, and RMSE to 0.0329. For SVR, MSE, MAE, and RMSE decreased to 0.0910, 0.1852, and 0.3017, respectively. Results indicated that MLP performed slightly better than LSTM-RNN, and both MLP and LSTM-RNN performed considerably better than SVR. Furthermore, ANNs were demonstrated to be the preferable machine learning methods for simulating and predicting karst spring discharge.
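The abstract compares the models with min-max normalization for preprocessing and MSE, MAE, and RMSE as scoring metrics. A minimal sketch of both, using hypothetical discharge values rather than the paper's data set:

```python
import math

def min_max_normalize(values):
    """Scale a series to [0, 1], the kind of normalization used for preprocessing."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def regression_metrics(y_true, y_pred):
    """Return the three comparison metrics used in the study: MSE, MAE, RMSE."""
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / len(errors)
    mae = sum(abs(e) for e in errors) / len(errors)
    return mse, mae, math.sqrt(mse)

# Hypothetical raw discharge observations (illustrative only).
normalized = min_max_normalize([120.0, 150.0, 180.0, 165.0])

# Hypothetical normalized observations vs. model predictions.
observed  = [0.30, 0.35, 0.40, 0.38]
predicted = [0.28, 0.36, 0.41, 0.35]
mse, mae, rmse = regression_metrics(observed, predicted)
```

Note that RMSE is simply the square root of MSE, which is why the two always rank the models identically; MAE can rank them differently because it weights large errors less heavily.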


Read also

Objective: Current resuscitation protocols require pausing chest compressions during cardiopulmonary resuscitation (CPR) to check for a pulse. However, pausing CPR during a pulseless rhythm can worsen patient outcome. Our objective is to design an ECG-based algorithm that predicts pulse status during uninterrupted CPR and evaluate its performance. Methods: We evaluated 383 patients being treated for out-of-hospital cardiac arrest using defibrillator data. We collected paired and immediately adjacent ECG segments having an organized rhythm. Segments were collected during the 10 s period of ongoing CPR prior to a pulse check, and 5 s segments without CPR during the pulse check. ECG segments with or without a pulse were identified by the audio annotation of a paramedic's pulse-check findings and recorded blood pressures. We developed an algorithm to predict the clinical pulse status based on the wavelet transform of the bandpass-filtered ECG, applying principal component analysis. We then trained a linear discriminant model using three principal component modes. Model performance was evaluated on test group segments with and without CPR using receiver operating characteristic curves and according to the initial arrest rhythm. Results: There were 230 patients (540 pulse checks) in the training set and 153 patients (372 pulse checks) in the test set. Overall, 38% (351/912) of checks had a spontaneous pulse. The areas under the receiver operating characteristic curve (AUCs) for predicting pulse status with and without CPR on test data were 0.84 and 0.89, respectively. Conclusion: A novel ECG-based algorithm demonstrates potential to improve resuscitation by predicting the presence of a spontaneous pulse without pausing CPR. Significance: Our algorithm predicts pulse status during uninterrupted CPR, allowing CPR to proceed unimpeded by pauses to check for a pulse and potentially improving resuscitation performance.
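This abstract reports AUCs of 0.84 and 0.89 for the pulse-status classifier. The AUC has a simple rank-based interpretation: the probability that a randomly chosen positive (pulse) segment receives a higher score than a randomly chosen negative one. A minimal sketch with made-up labels and scores:

```python
def roc_auc(labels, scores):
    """Area under the ROC curve via the rank interpretation:
    fraction of (positive, negative) pairs where the positive
    example scores higher (ties count as one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical pulse labels (1 = spontaneous pulse) and classifier scores.
labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.2]
auc = roc_auc(labels, scores)
```

An AUC of 0.5 corresponds to random scoring, 1.0 to perfect separation.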
We present a comparison of several Difference Image Analysis (DIA) techniques, in combination with Machine Learning (ML) algorithms, applied to the identification of optical transients associated with gravitational wave events. Each technique is assessed based on the scoring metrics of Precision, Recall, and their harmonic mean F1, measured on the DIA results as standalone techniques, and also on the results after the application of ML algorithms, on transient source injections over simulated and real data. These simulations cover a wide range of instrumental configurations, as well as a variety of observation-condition scenarios, by exploring a multidimensional set of relevant parameters, allowing us to extract general conclusions related to the identification of transient astrophysical events. The newest subtraction techniques, and particularly the methodology published in Zackay et al. (2016), are implemented in an Open Source Python package, named properimage, suitable for many other astronomical image analyses. This, together with the ML libraries we describe, provides an effective transient detection software pipeline. Here we study the effects of the different ML techniques and the relative feature importances for classification of transient candidates, and propose an optimal combined strategy. This constitutes the basic elements of pipelines that could be applied in searches of electromagnetic counterparts to GW sources.
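The scoring metrics named above (Precision, Recall, and their harmonic mean F1) can be computed directly from the counts of true positives, false positives, and false negatives. A minimal sketch with hypothetical transient-detection counts:

```python
def precision_recall_f1(tp, fp, fn):
    """Precision, Recall, and their harmonic mean F1 from detection counts:
    tp = true detections, fp = spurious detections, fn = missed transients."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts from injected transient sources (illustrative only).
precision, recall, f1 = precision_recall_f1(tp=8, fp=2, fn=4)
```

Because F1 is a harmonic mean, it is dominated by the weaker of the two components, which is why it is a common single-number summary for detection pipelines.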
Postprocessing ensemble weather predictions to correct systematic errors has become a standard practice in research and operations. However, only a few recent studies have focused on ensemble postprocessing of wind gust forecasts, despite its importance for severe weather warnings. Here, we provide a comprehensive review and systematic comparison of eight statistical and machine learning methods for probabilistic wind gust forecasting via ensemble postprocessing, which can be divided into three groups: state-of-the-art postprocessing techniques from statistics (ensemble model output statistics (EMOS), member-by-member postprocessing, isotonic distributional regression), established machine learning methods (gradient-boosting extended EMOS, quantile regression forests), and neural network-based approaches (distributional regression network, Bernstein quantile network, histogram estimation network). The methods are systematically compared using six years of data from a high-resolution, convection-permitting ensemble prediction system that was run operationally at the German weather service, and hourly observations at 175 surface weather stations in Germany. While all postprocessing methods yield calibrated forecasts and are able to correct the systematic errors of the raw ensemble predictions, incorporating information from additional meteorological predictor variables beyond wind gusts leads to significant improvements in forecast skill. In particular, we propose a flexible framework of locally adaptive neural networks with different probabilistic forecast types as output, which not only significantly outperform all benchmark postprocessing methods but also learn physically consistent relations associated with the diurnal cycle, especially the evening transition of the planetary boundary layer.
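Several of the methods listed above, such as quantile regression forests and the Bernstein quantile network, output predictive quantiles; the standard objective for fitting and evaluating such outputs is the quantile (pinball) loss. A minimal sketch with hypothetical gust values, not tied to any specific method in the abstract:

```python
def pinball_loss(y, q_hat, tau):
    """Quantile (pinball) loss for a predicted tau-quantile q_hat and
    observation y. At tau = 0.9, under-prediction is penalized 9x more
    than over-prediction, pushing q_hat toward the 90th percentile."""
    diff = y - q_hat
    return tau * diff if diff >= 0 else (tau - 1) * diff

# Hypothetical observed gust (m/s) vs. a predicted 0.9-quantile.
under = pinball_loss(y=10.0, q_hat=8.0, tau=0.9)   # forecast too low
over = pinball_loss(y=10.0, q_hat=12.0, tau=0.9)   # forecast too high
```

Averaged over many cases, this loss is minimized when the forecasts are calibrated quantiles, which is the property the comparison above checks for.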
Understanding and removing bias from the decisions made by machine learning models is essential to avoid discrimination against unprivileged groups. Despite recent progress in algorithmic fairness, there is still no clear answer as to which bias-mitigation approaches are most effective. Evaluation strategies are typically use-case specific, rely on data with unclear bias, and employ a fixed policy to convert model outputs to decision outcomes. To address these problems, we performed a systematic comparison of a number of popular fairness algorithms applicable to supervised classification. Our study is the most comprehensive of its kind. It utilizes three real and four synthetic datasets, and two different ways of converting model outputs to decisions. It considers the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines, corresponding to both fairness-unaware and fairness-aware algorithms. We found that fairness-unaware algorithms typically fail to produce adequately fair models and that the simplest algorithms are not necessarily the fairest ones. We also found that fairness-aware algorithms can induce fairness without material drops in predictive power. Finally, we found that dataset idiosyncrasies (e.g., degree of intrinsic unfairness, nature of correlations) do affect the performance of fairness-aware approaches. Our results allow practitioners to narrow down the approach(es) they would like to adopt without having to know their fairness requirements in advance.
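A common way to quantify the group fairness that such comparisons measure is the demographic parity gap: the difference in selection rates between groups after model outputs are converted to decisions. A minimal sketch with hypothetical decisions and group labels (the abstract does not specify which fairness metrics were used):

```python
def selection_rates(decisions, groups):
    """Fraction of positive decisions per group (decisions are 0/1)."""
    rates = {}
    for g in set(groups):
        picked = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(picked) / len(picked)
    return rates

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups;
    0.0 means perfect demographic parity."""
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical binary decisions for two groups (illustrative only).
decisions = [1, 1, 0, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)
```

Note that this metric depends on the decision policy, not just the model scores, which is why the study above evaluates two different ways of converting model outputs to decisions.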
As data generation increasingly takes place on devices without a wired connection, Machine Learning over wireless networks becomes critical. Many studies have shown that traditional wireless protocols are highly inefficient or unsustainable for supporting Distributed Machine Learning. This is creating the need for new wireless communication methods. In this survey, we give an exhaustive review of the state-of-the-art wireless methods that are specifically designed to support Machine Learning services, namely over-the-air computation and radio resource allocation optimized for Machine Learning. In the over-the-air approach, multiple devices communicate simultaneously over the same time slot and frequency band to exploit the superposition property of wireless channels for gradient averaging over the air. In radio resource allocation optimized for Machine Learning, Active Learning metrics allow for data evaluation to greatly optimize the assignment of radio resources. This paper gives a comprehensive introduction to these methods, reviews the most important works, and highlights crucial open problems.
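The over-the-air idea described above can be sketched numerically: simultaneous transmissions add up in the channel (superposition), so the receiver obtains the sum of the local gradients "for free" and only needs to divide by the number of devices. A minimal, noiseless sketch (real systems must also handle channel fading and noise, which this omits):

```python
def over_the_air_average(local_gradients):
    """Simulate noiseless over-the-air gradient averaging: the channel
    superposes the simultaneously transmitted gradient vectors
    coordinate-wise, and the receiver scales the sum by 1/n."""
    n = len(local_gradients)
    superposed = [sum(coords) for coords in zip(*local_gradients)]
    return [s / n for s in superposed]

# Hypothetical 2-dimensional gradients from three devices.
avg = over_the_air_average([[1.0, 2.0], [3.0, 4.0], [2.0, 0.0]])
```

The point of the scheme is that the averaging step costs one channel use regardless of the number of devices, instead of one transmission per device.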