
Predicting Clinical Deterioration of Outpatients Using Multimodal Data Collected by Wearables

Added by Dingwen Li
Publication date: 2018
Language: English





The hospital readmission rate is high for heart failure patients. Early detection of deterioration helps doctors prevent readmissions, thus reducing health care costs and providing patients with just-in-time intervention. Wearable devices (e.g., wristbands and smart watches) provide a convenient technology for continuous outpatient monitoring. In this paper, we explore the feasibility of monitoring outpatients using Fitbit Charge HR wristbands and the potential of machine learning models to predict clinical deterioration (readmission and death) among outpatients discharged from the hospital. We developed and piloted a data collection system in a clinical study involving 25 heart failure patients recently discharged from a hospital. The results of the clinical study demonstrated the feasibility of continuously monitoring outpatients using wristbands: we observed high levels of patient compliance in wearing the wristbands regularly, and satisfactory yield, latency, and reliability of data collection from the wristbands to a cloud-based database. Finally, we explored a set of machine learning models to predict deterioration from the Fitbit data. Under 5-fold cross-validation, K-nearest neighbors achieved the highest accuracy, 0.8800, for identifying patients at risk of deterioration using the health data from the beginning of the monitoring period. Machine learning models based on multimodal data (step, sleep, and heart rate) significantly outperformed the traditional clinical approach based on the LACE index. Moreover, our proposed weighted-samples one-class SVM model reaches high accuracy (0.9635) for predicting future deterioration using data collected over a sliding window, indicating the potential to enable timely intervention.
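As a rough illustration of the evaluation described above (not the authors' code), the sketch below runs 5-fold cross-validation of a K-nearest-neighbors classifier on a per-patient table summarizing step, sleep, and heart-rate data; the feature names, labels, and synthetic values are placeholders assumed here only to make the example runnable.

```python
# Minimal sketch, not the study's implementation: 5-fold cross-validation of a
# K-nearest-neighbors classifier on per-patient multimodal wearable features.
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 25

# Placeholder per-patient summaries of the wearable streams (not the real cohort).
features = pd.DataFrame({
    "mean_daily_steps":   rng.normal(4000, 1500, n_patients),
    "mean_sleep_minutes": rng.normal(380, 60, n_patients),
    "mean_resting_hr":    rng.normal(72, 8, n_patients),
    "resting_hr_std":     rng.normal(6, 2, n_patients),
})
# 1 = deteriorated (readmission or death), 0 = stable; balanced placeholder labels.
deteriorated = rng.permutation([0] * 15 + [1] * 10)

# Standardize features so the distance metric is not dominated by step counts,
# then score a KNN classifier with 5-fold cross-validation.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(model, features, deteriorated, cv=5, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.4f} +/- {scores.std():.4f}")
```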



Related Research

Parkinson's Disease is a neurological disorder prevalent in elderly people. Traditional ways to diagnose the disease rely on subjective, in-person clinical evaluations of the quality of a set of activity tests. The high-resolution longitudinal activity data now collected by smartphone applications make it possible to conduct remote and convenient health assessments. However, out-of-lab tests often suffer from poor quality control as well as irregularly collected observations, leading to noisy test results. To address these issues, we propose a novel time-series-based approach to predicting Parkinson's Disease from raw activity test data collected by smartphones in the wild. The proposed method first synchronizes discrete activity tests into multimodal features at unified time points. Next, it distills and enriches local and global representations from noisy data across modalities and temporal observations using two attention modules. With the proposed mechanisms, our model is capable of handling noisy observations while extracting refined temporal features for improved prediction performance. Quantitative and qualitative results on a large public dataset demonstrate the effectiveness of the proposed approach.
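As a hypothetical sketch of the synchronization step described above (not the authors' code), the snippet below aligns irregularly timed activity tests from several modalities onto one time grid with pandas; the modality names, column names, and daily grid are assumptions.

```python
# Minimal sketch: merge irregular per-modality test results into one table of
# multimodal features at unified time points.
import pandas as pd

def synchronize(tests: dict, freq: str = "1D") -> pd.DataFrame:
    """Align each modality's irregular tests onto one time grid and join them.

    `tests` maps a modality name (e.g. "walking", "tapping", "voice") to a
    DataFrame with a datetime 'timestamp' column plus numeric feature columns.
    """
    aligned = []
    for name, df in tests.items():
        df = df.set_index("timestamp").sort_index()
        # Average repeated tests that fall in the same window; gaps stay NaN.
        resampled = df.resample(freq).mean().add_prefix(f"{name}_")
        aligned.append(resampled)
    # Outer join keeps time points where only some modalities were observed.
    return pd.concat(aligned, axis=1, join="outer")
```

With the aligned table in hand, the attention modules described in the abstract would operate over its rows (time points) and column groups (modalities).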
Data-driven decision making is serving and transforming education. We approached the problem of predicting students' performance by using multiple data sources from online courses, including one we created. Experimental results show preliminary conclusions about which data should be considered for this task.
Clinical trials provide essential guidance for practicing Evidence-Based Medicine, though they often come with substantial costs and risks. To optimize the design of clinical trials, we introduce a novel Clinical Trial Result Prediction (CTRP) task. In the CTRP framework, a model takes a PICO-formatted clinical trial proposal with its background as input and predicts the result, i.e., how the Intervention group compares with the Comparison group in terms of the measured Outcome in the studied Population. While structured clinical evidence is prohibitively expensive to collect manually, we exploit large-scale unstructured sentences from the medical literature that implicitly contain PICOs and results as evidence. Specifically, we pre-train a model to predict the disentangled results from such implicit evidence and fine-tune the model with limited data on the downstream datasets. Experiments on the benchmark Evidence Integration dataset show that the proposed model outperforms the baselines by large margins, e.g., with a 10.7% relative gain over BioBERT in macro-F1. Moreover, the performance improvement is also validated on another dataset composed of clinical trials related to COVID-19.
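As a hypothetical sketch of only the fine-tuning stage mentioned above (the paper's pre-training on implicit evidence from the literature is not reproduced), the snippet below fine-tunes a BioBERT-style encoder to map a PICO-formatted proposal text to a result class with Hugging Face Transformers; the checkpoint name, label encoding, and toy data are assumptions.

```python
# Illustrative sketch only, not the paper's code: fine-tune a BioBERT-style
# encoder to predict a trial-result class from a PICO-formatted proposal text.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

checkpoint = "dmis-lab/biobert-base-cased-v1.1"   # assumed pre-trained model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

# Toy stand-in for the real data: text = background + PICO fields, label in
# {0: worse, 1: no difference, 2: better} (hypothetical encoding).
train_ds = Dataset.from_dict({
    "text": ["P: adults with heart failure I: drug A C: placebo O: mortality"] * 8,
    "label": [2, 1, 0, 2, 1, 0, 2, 1],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ctrp_model", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train_ds.map(tokenize, batched=True),
    tokenizer=tokenizer,   # enables dynamic padding via the default collator
)
trainer.train()
```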
Secure and scalable data sharing is essential for collaborative clinical decision making. Conventional clinical data efforts, however, are often siloed, which creates barriers to efficient information exchange and impedes effective treatment decisions for patients. This paper provides four contributions to the study of applying blockchain technology to clinical data sharing in the context of the technical requirements defined in the Shared Nationwide Interoperability Roadmap from the Office of the National Coordinator for Health Information Technology (ONC). First, we analyze the ONC requirements and their implications for blockchain-based systems. Second, we present FHIRChain, a blockchain-based architecture designed to meet the ONC requirements by encapsulating the HL7 Fast Healthcare Interoperability Resources (FHIR) standard for shared clinical data. Third, we demonstrate a FHIRChain-based decentralized app that uses digital health identities to authenticate participants in a case study of collaborative decision making for remote cancer care. Fourth, we highlight key lessons learned from our case study.
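The following is a purely illustrative sketch of one general design principle behind reference-based sharing of this kind: clinical data stays off-chain, and only a pointer plus an integrity hash is anchored on the ledger. It is not FHIRChain's implementation; the class, field names, identifiers, and in-memory ledger are assumptions.

```python
# Toy sketch: anchor a reference to an off-chain FHIR resource on a mock ledger.
import hashlib
import json
import time
from dataclasses import dataclass, field

@dataclass
class OnChainRecord:
    owner_pubkey: str    # digital health identity of the data owner
    resource_url: str    # pointer to the FHIR resource kept off-chain
    resource_hash: str   # SHA-256 digest for tamper detection
    timestamp: float = field(default_factory=time.time)

def anchor_resource(ledger, owner_pubkey, resource_url, fhir_resource):
    """Record a reference to an off-chain FHIR resource on a toy in-memory ledger."""
    digest = hashlib.sha256(
        json.dumps(fhir_resource, sort_keys=True).encode()).hexdigest()
    record = OnChainRecord(owner_pubkey, resource_url, digest)
    ledger.append(record)   # stand-in for submitting a blockchain transaction
    return record

# Usage: the clinical data itself never touches the ledger, only its reference.
ledger = []
anchor_resource(ledger,
                owner_pubkey="pubkey-placeholder",
                resource_url="https://ehr.example/fhir/Patient/123",
                fhir_resource={"resourceType": "Patient", "id": "123"})
```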
Although increasingly used as a data resource for assembling cohorts, electronic health records (EHRs) pose many analytic challenges. In particular, a patient's health status influences when and what data are recorded, generating sampling bias in the collected data. In this paper, we consider recurrent event analysis using EHR data. Conventional regression methods for event risk analysis usually require the values of covariates to be observed throughout the follow-up period. In EHR databases, time-dependent covariates are intermittently measured during clinical visits, and the timing of these visits is informative in the sense that it depends on the disease course. Simple methods, such as the last-observation-carried-forward approach, can lead to biased estimation. On the other hand, complex joint models require additional assumptions on the covariate process and cannot easily be extended to handle multiple longitudinal predictors. By incorporating sampling weights derived from estimating the observation-time process, we develop a novel estimation procedure based on inverse-rate weighting and kernel smoothing for the semiparametric proportional rate model of recurrent events. The proposed methods do not require model specifications for the covariate processes and can easily handle multiple time-dependent covariates. Our methods are applied to a kidney transplant study for illustration.
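For orientation only, here is a generic sketch in standard notation of the two ingredients named above; it is not the paper's exact estimator. $N_i^{*}(t)$ denotes the recurrent event process, $Z_i(t)$ the time-dependent covariates, and $O_i(t)$ the visit (observation-time) process with auxiliary covariates $X_i(t)$; all symbols beyond the named models are assumptions for illustration.

```latex
% Semiparametric proportional rate model for the recurrent events:
\[
  \mathrm{E}\bigl[\, dN_i^{*}(t) \mid Z_i(t) \,\bigr]
    \;=\; \exp\!\bigl\{\beta_0^{\top} Z_i(t)\bigr\}\, d\mu_0(t).
\]
% A proportional rate model for the informative visit process yields an
% inverse-rate weight that down-weights periods of frequent visits:
\[
  \lambda_i(t) \;=\; \lambda_0(t)\, \exp\!\bigl\{\gamma^{\top} X_i(t)\bigr\},
  \qquad
  w_i(t) \;=\; \frac{1}{\hat{\lambda}_i(t)} .
\]
% Covariate values, observed only at visit times s, are combined near t with a
% kernel K_h(t - s); the weighted, smoothed covariates then enter the estimating
% equations for beta_0 without requiring a full model for the covariate process.
```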