
Machine Learning and Feature Engineering for Predicting Pulse Status during Chest Compressions

 Added by Diya Sashidhar
 Publication date 2020
Language: English





Objective: Current resuscitation protocols require pausing chest compressions during cardiopulmonary resuscitation (CPR) to check for a pulse. However, pausing CPR during a pulseless rhythm can worsen patient outcome. Our objective is to design an ECG-based algorithm that predicts pulse status during uninterrupted CPR and to evaluate its performance. Methods: We evaluated 383 patients treated for out-of-hospital cardiac arrest using defibrillator data. We collected paired, immediately adjacent ECG segments having an organized rhythm: 10-s segments during ongoing CPR prior to a pulse check, and 5-s segments without CPR during the pulse check. ECG segments with or without a pulse were identified from the audio annotation of the paramedic's pulse-check findings and recorded blood pressures. We developed an algorithm to predict clinical pulse status based on the wavelet transform of the bandpass-filtered ECG, applying principal component analysis. We then trained a linear discriminant model using three principal component modes. Model performance was evaluated on test-group segments with and without CPR using receiver operating characteristic curves and according to the initial arrest rhythm. Results: There were 230 patients (540 pulse checks) in the training set and 153 patients (372 pulse checks) in the test set. Overall, 38% (351/912) of checks had a spontaneous pulse. The areas under the receiver operating characteristic curve (AUCs) for predicting pulse status with and without CPR on test data were 0.84 and 0.89, respectively. Conclusion: A novel ECG-based algorithm demonstrates potential to improve resuscitation by predicting the presence of a spontaneous pulse without pausing CPR. Significance: Our algorithm predicts pulse status during uninterrupted CPR, allowing CPR to proceed unimpeded by pauses to check for a pulse and potentially improving resuscitation performance.
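
The abstract describes a fixed processing chain: bandpass-filter the ECG, take a wavelet transform, reduce the coefficients to three principal component modes, and classify pulse status with a linear discriminant model. The sketch below illustrates that chain with standard Python libraries; the sampling rate, filter band, wavelet family, and the assumption that segments share a common length are illustrative choices not specified in the abstract.

```python
# Hedged sketch of the pipeline summarized above:
# bandpass-filtered ECG -> wavelet transform -> PCA (3 modes) -> linear discriminant model.
# Sampling rate, filter band, and wavelet family are assumptions, not taken from the paper.
import numpy as np
import pywt
from scipy.signal import butter, sosfiltfilt
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score

FS = 250  # assumed ECG sampling rate (Hz)

def bandpass(ecg, low=0.5, high=40.0, fs=FS):
    """Butterworth bandpass filter; cutoff frequencies are illustrative."""
    sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
    return sosfiltfilt(sos, ecg)

def wavelet_features(ecg, wavelet="db4", level=4):
    """Flatten the multilevel DWT coefficients of a segment into one feature vector.
    Segments are assumed to be cropped/resampled to a common length beforehand."""
    coeffs = pywt.wavedec(bandpass(ecg), wavelet, level=level)
    return np.concatenate(coeffs)

def train_pulse_model(segments, labels, n_modes=3):
    """Fit PCA (three modes, as in the abstract) followed by a linear discriminant."""
    X = np.vstack([wavelet_features(s) for s in segments])
    pca = PCA(n_components=n_modes).fit(X)
    lda = LinearDiscriminantAnalysis().fit(pca.transform(X), labels)
    return pca, lda

def evaluate_auc(pca, lda, segments, labels):
    """Area under the ROC curve for the predicted probability of a spontaneous pulse."""
    X = np.vstack([wavelet_features(s) for s in segments])
    scores = lda.predict_proba(pca.transform(X))[:, 1]
    return roc_auc_score(labels, scores)
```

Mirroring the evaluation in the abstract, such a model would be fit on the training-set pulse checks and the AUC reported separately for test segments recorded with and without ongoing CPR.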




Related research

The quantitative analyses of karst spring discharge typically rely on physics-based models, which are inherently uncertain. To improve the understanding of the mechanism of spring discharge fluctuation and the relationship between precipitation and spring discharge, three machine learning methods were developed to reduce the predictive errors of physics-based groundwater models, simulate the discharge of the Longzici Springs karst area, and predict changes in the spring on the basis of long time-series precipitation monitoring and spring flow data from 1987 to 2018. The three machine learning methods comprised two artificial neural networks (ANNs), namely a multilayer perceptron (MLP) and a long short-term memory recurrent neural network (LSTM-RNN), and support vector regression (SVR). A normalization method was introduced for data preprocessing to make the three methods robust and computationally efficient. To compare and evaluate the capability of the three machine learning methods, the mean squared error (MSE), mean absolute error (MAE), and root-mean-square error (RMSE) were selected as the performance metrics. Simulations showed that MLP reduced MSE, MAE, and RMSE to 0.0010, 0.0254, and 0.0318, respectively. Meanwhile, LSTM-RNN reduced MSE to 0.0010, MAE to 0.0272, and RMSE to 0.0329. For SVR, the corresponding MSE, MAE, and RMSE were 0.0910, 0.1852, and 0.3017, respectively. Results indicated that MLP performed slightly better than LSTM-RNN, and that both performed considerably better than SVR. Furthermore, ANNs were demonstrated to be the preferable machine learning methods for simulating and predicting karst spring discharge.
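
As a rough illustration of the workflow sketched in this abstract (normalization, precipitation-to-discharge regression, and MSE/MAE/RMSE scoring), the snippet below compares an MLP and an SVR with scikit-learn on lagged precipitation inputs; the lag window, hyperparameters, and train/test split are assumptions, and the LSTM-RNN variant is omitted for brevity.

```python
# Hedged sketch: normalize the series, fit MLP and SVR regressors on lagged
# precipitation, and report MSE/MAE/RMSE, as in the comparison described above.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, mean_absolute_error

def make_lagged(precip, discharge, lags=12):
    """Predict discharge at time t from the previous `lags` precipitation values."""
    precip, discharge = np.asarray(precip, float), np.asarray(discharge, float)
    X = np.array([precip[i - lags:i] for i in range(lags, len(precip))])
    return X, discharge[lags:]

def compare_models(precip, discharge, lags=12, split=0.8):
    X, y = make_lagged(precip, discharge, lags)
    n = int(split * len(X))  # chronological train/test split
    x_scaler = MinMaxScaler().fit(X[:n])
    y_scaler = MinMaxScaler().fit(y[:n, None])
    Xs, ys = x_scaler.transform(X), y_scaler.transform(y[:, None]).ravel()
    models = {
        "MLP": MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
        "SVR": SVR(kernel="rbf", C=1.0),
    }
    for name, model in models.items():
        model.fit(Xs[:n], ys[:n])
        pred = model.predict(Xs[n:])
        mse = mean_squared_error(ys[n:], pred)
        mae = mean_absolute_error(ys[n:], pred)
        print(f"{name}: MSE={mse:.4f}  MAE={mae:.4f}  RMSE={np.sqrt(mse):.4f}")
```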
Machine learning (ML) pervades an increasing number of academic disciplines and industries. Its impact is profound, and several fields, autonomy and computer vision for example, have been fundamentally altered by it; reliability engineering and safety will undoubtedly follow suit. There is already a large but fragmented literature on ML for reliability and safety applications, and it can be overwhelming to navigate and integrate into a coherent whole. In this work, we facilitate this task by providing a synthesis of, and a roadmap to, this ever-expanding analytical landscape and by highlighting its major landmarks and pathways. We first provide an overview of the different ML categories and sub-categories or tasks, and we note several of the corresponding models and algorithms. We then look back and review the use of ML in reliability and safety applications. We examine several publications in each category/sub-category, and we include a short discussion on the use of Deep Learning to highlight its growing popularity and distinctive advantages. Finally, we look ahead and outline several promising future opportunities for leveraging ML in service of advancing reliability and safety considerations. Overall, we argue that ML is capable of providing novel insights and opportunities to solve important challenges in reliability and safety applications. It is also capable of teasing out more accurate insights from accident datasets than traditional analysis tools can, and this in turn can lead to better-informed decision-making and more effective accident prevention.
Datasets are rarely a realistic approximation of the target population: prevalence may be misrepresented, image quality may be above clinical standards, and so on. This mismatch is known as sampling bias. Sampling biases are a major hindrance for machine learning models; they cause significant gaps between model performance in the lab and in the real world. Our work is a solution to prevalence bias, the discrepancy between the prevalence of a pathology and its sampling rate in the training dataset, introduced upon collecting data or due to the practitioner rebalancing the training batches. This paper lays the theoretical and computational framework for training models, and for prediction, in the presence of prevalence bias. Concretely, a bias-corrected loss function, as well as bias-corrected predictive rules, are derived under the principles of Bayesian risk minimization. The loss exhibits a direct connection to the information gain. It offers a principled alternative to heuristic training losses and complements test-time procedures based on selecting an operating point from summary curves. It integrates seamlessly into the current paradigm of (deep) learning using stochastic backpropagation, and naturally with Bayesian models.
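
The bias-corrected loss itself is not given in the abstract, so the sketch below only illustrates the general idea of correcting a label-prior shift: reweight the training loss and shift the predictive logits by the ratio of target to training class priors. It is a generic prior-shift correction, not the paper's exact derivation.

```python
# Generic prior-shift correction (illustrative only): assumes the training batches
# and the target population differ solely in class prevalence.
import torch
import torch.nn.functional as F

def bias_corrected_loss(logits, targets, train_prev, true_prev):
    """Cross-entropy reweighted by the ratio of target to training class priors."""
    weights = (torch.as_tensor(true_prev) / torch.as_tensor(train_prev)).float()
    return F.cross_entropy(logits, targets, weight=weights)

def bias_corrected_predict(logits, train_prev, true_prev):
    """Shift the logits by the log prior ratio before taking the argmax."""
    shift = torch.log(torch.as_tensor(true_prev) / torch.as_tensor(train_prev)).float()
    return torch.argmax(logits + shift, dim=-1)
```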
Dynamic malware analysis executes the program in an isolated environment and monitors its run-time behaviour (e.g., system API calls) for malware detection. This technique has been proven effective against various code obfuscation techniques and newly released (zero-day) malware. However, existing works typically consider only the API name while ignoring the arguments, or require complex feature engineering operations and expert knowledge to process the arguments. In this paper, we propose a novel and low-cost feature extraction approach and an effective deep neural network architecture for accurate and fast malware detection. Specifically, the feature representation approach uses a feature-hashing trick to encode the API call arguments together with the API name. The deep neural network architecture applies multiple gated CNNs (convolutional neural networks) to transform the extracted features of each API call. The outputs are further processed through a bidirectional LSTM (long short-term memory network) to learn the sequential correlation among API calls. Experiments show that our solution outperforms baselines significantly on a large real-world dataset. Valuable insights about feature engineering and architecture design are derived from the ablation study.
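
A minimal sketch of the described architecture follows, assuming illustrative dimensions: each API call (name plus string arguments) is feature-hashed into a fixed-size vector, a gated convolution transforms each call, and a bidirectional LSTM aggregates the call sequence into a malware probability.

```python
# Hedged sketch: feature-hash API name + arguments, gated CNN per call,
# bidirectional LSTM over the call sequence. Dimensions are assumptions.
import torch
import torch.nn as nn
from sklearn.feature_extraction import FeatureHasher

HASH_DIM = 128  # assumed hashing dimension

def hash_call(api_name, args, dim=HASH_DIM):
    """Encode one API call (name and its string arguments) as a dense hashed vector."""
    hasher = FeatureHasher(n_features=dim, input_type="string")
    vec = hasher.transform([[api_name] + list(args)]).toarray()[0]
    return torch.tensor(vec, dtype=torch.float32)

class MalwareNet(nn.Module):
    def __init__(self, dim=HASH_DIM, hidden=64):
        super().__init__()
        # Gated CNN: one half of the channels is the signal, the other half the gate.
        self.conv = nn.Conv1d(dim, 2 * hidden, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, calls):                            # calls: (batch, seq_len, dim)
        h = self.conv(calls.transpose(1, 2))             # (batch, 2*hidden, seq_len)
        a, b = h.chunk(2, dim=1)
        gated = (a * torch.sigmoid(b)).transpose(1, 2)   # (batch, seq_len, hidden)
        _, (h_n, _) = self.lstm(gated)                   # h_n: (2, batch, hidden)
        final = torch.cat([h_n[0], h_n[1]], dim=-1)      # concatenate both directions
        return torch.sigmoid(self.out(final)).squeeze(-1)  # malware probability
```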
Making binary decisions is a common data analytical task in scientific research and industrial applications. In data sciences, there are two related but distinct strategies: hypothesis testing and binary classification. In practice, how to choose between these two strategies can be unclear and rather confusing. Here we summarize key distinctions between these two strategies in three aspects and list five practical guidelines for data analysts to choose the appropriate strategy for specific analysis needs. We demonstrate the use of those guidelines in a cancer driver gene prediction example.