
Development and validation of computable Phenotype to Identify and Characterize Kidney Health in Adult Hospitalized Patients

Added by Azra Bihorac
Publication date: 2019
Language: English





Background: Acute kidney injury (AKI) is a common complication in hospitalized patients and a common cause of chronic kidney disease (CKD), increased hospital cost, and mortality. Timely detection of AKI and of AKI progression would allow effective preventive or therapeutic measures to be offered. This study aims to develop and validate an electronic phenotype to identify patients with CKD and AKI. Methods: We created a database of electronic health record data from a retrospective study cohort of 84,352 hospitalized adults. This repository includes demographics, comorbidities, vital signs, laboratory values, medications, and diagnosis and procedure codes for the index admission, the 12 months prior, and 12 months of follow-up encounters. We developed algorithms to identify CKD and AKI based on the Kidney Disease: Improving Global Outcomes (KDIGO) criteria. To measure the diagnostic performance of the algorithms, clinician experts performed clinical adjudication of AKI and CKD on 300 selected cases. Results: Among 149,136 encounters, the prevalence of CKD identified by medical history was 12%, which increased to 16% using creatinine criteria. Among the 130,081 encounters with sufficient data for AKI phenotyping, 21% had AKI. Comparison of the CKD phenotyping algorithm to manual chart review yielded a PPV of 0.87, NPV of 0.99, sensitivity of 0.99, and specificity of 0.89. Comparison of the AKI phenotyping algorithm to manual chart review yielded a PPV of 0.99, NPV of 0.95, sensitivity of 0.98, and specificity of 0.98. Conclusions: We developed phenotyping algorithms that yielded very good performance in identifying patients with CKD and AKI in the validation cohort. This tool may be useful for identifying patients with kidney disease in a large population and for assessing the quality and value of care in such patients.
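
As a rough illustration of the creatinine-based KDIGO rules the abstract refers to, a minimal sketch follows. It assumes a long-format lab table with hypothetical columns `encounter_id`, `time`, and `creatinine` (mg/dL); the study's actual algorithm also covers baseline estimation, urine output, and CKD staging, which are omitted here.

```python
import pandas as pd

def flag_aki_kdigo(labs: pd.DataFrame) -> pd.DataFrame:
    """Flag encounters meeting the creatinine-based KDIGO AKI criteria:
    a rise >= 0.3 mg/dL within 48 hours, or >= 1.5x a rolling 7-day baseline."""
    labs = labs.sort_values(["encounter_id", "time"])
    flags = []
    for enc, grp in labs.groupby("encounter_id"):
        series = grp.set_index("time")["creatinine"]
        # Lowest value in the preceding 48 h / 7 days (including the current one).
        min_48h = series.rolling("48h").min()
        baseline_7d = series.rolling("7D").min()
        aki = ((series - min_48h >= 0.3) | (series >= 1.5 * baseline_7d)).any()
        flags.append({"encounter_id": enc, "aki": bool(aki)})
    return pd.DataFrame(flags)
```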




Related research

Yuanfeng Ren, 2020
Background: In the United States, 5.7 million patients are admitted annually to intensive care units (ICU), with costs exceeding $82 billion. Although close monitoring and dynamic assessment of patient acuity are key aspects of ICU care, both are limited by the time constraints imposed on healthcare providers. Methods: Using the University of Florida Health (UFH) Integrated Data Repository as an Honest Broker, we created a database with electronic health record data from a retrospective study cohort of 38,749 adult patients admitted to the ICU at UF Health between 06/01/2014 and 08/22/2019. This repository includes demographic information, comorbidities, vital signs, laboratory values, medications with dates and timestamps, and diagnosis and procedure codes for all index admission encounters as well as encounters within the 12 months prior to the index admission and 12 months of follow-up. We developed algorithms to identify the acuity status of each patient every four hours during each ICU stay. Results: There were 383,193 hospital encounters (121,800 unique patients) and 51,073 encounters (38,749 unique patients) with at least one ICU stay lasting more than four hours. Patients requiring ICU admission had a longer median hospital stay (7 days vs. 1 day) and higher in-hospital mortality (9.6% vs. 0.4%) compared with those not admitted to the ICU. Among patients who were admitted to the ICU and died during the hospital admission, more deaths occurred in the ICU than on general hospital wards (7.4% vs. 0.8%, respectively). Conclusions: We developed phenotyping algorithms that determined patient acuity status every four hours while admitted to the ICU. This approach may be useful in developing prognostic and clinical decision-support tools to aid patients, caregivers, and providers in shared decision-making processes regarding resource use and escalation of care.
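
The four-hour granularity lends itself to a simple resampling step; the sketch below is illustrative only, with hypothetical column names (`icu_stay_id`, `time`, `heart_rate`) and a placeholder acuity rule rather than the study's actual algorithm.

```python
import pandas as pd

def four_hour_acuity(vitals: pd.DataFrame) -> pd.DataFrame:
    """Aggregate vital signs into consecutive 4-hour windows per ICU stay and
    attach a placeholder acuity label (the real algorithm combines many inputs)."""
    vitals = vitals.sort_values("time").set_index("time")
    binned = (vitals.groupby("icu_stay_id")
                    .resample("4h")["heart_rate"].mean()
                    .reset_index(name="mean_hr"))
    # Placeholder rule: call a window "unstable" if mean heart rate exceeds 110 bpm.
    binned["acuity"] = binned["mean_hr"].apply(
        lambda hr: "unstable" if pd.notna(hr) and hr > 110 else "stable")
    return binned
```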
Building phenotype models using electronic health record (EHR) data conventionally requires manually labeled cases and controls. Assigning labels is labor intensive and, for some phenotypes, identifying gold-standard controls is prohibitive. To facilitate comprehensive clinical decision support and research, we sought to develop an accurate EHR phenotyping approach that assesses its performance without a validation set. Our framework relies on specifying a random subset of cases, potentially using an anchor variable that has excellent positive predictive value and sensitivity that is independent of predictors. We developed a novel maximum likelihood approach that efficiently leverages data from anchor-positive and unlabeled patients to develop logistic regression phenotyping models. Additionally, we described novel statistical methods for estimating phenotype prevalence and assessing model calibration and predictive performance measures. Theoretical and simulation studies indicated our method generates accurate predicted probabilities, leading to excellent discrimination and calibration, and consistent estimates of phenotype prevalence and anchor sensitivity. The method appeared robust to minor lack-of-fit, and the proposed calibration assessment detected major lack-of-fit. We applied our method to EHR data to develop a preliminary model for identifying patients with primary aldosteronism, which achieved an AUC of 0.99 and PPV of 0.8. We developed novel statistical methods for accurate model development and validation with minimal manual labeling, facilitating development of scalable, transferable, semi-automated case labeling and practice-specific models. Our EHR phenotyping approach decreases labor-intensive manual phenotyping and annotation, which should enable broader model development and dissemination for EHR clinical decision support and research.
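
A simplified way to see how anchor-positive and unlabeled records can train a phenotype model is the positive-unlabeled scaling heuristic sketched below (the Elkan-Noto correction, not the paper's maximum-likelihood estimator); `X` and `anchor` are assumed inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def anchor_pu_model(X: np.ndarray, anchor: np.ndarray):
    """Fit a logistic model to the anchor label and rescale its probabilities.

    anchor[i] == 1 means the anchor variable fired for patient i (assumed a true
    case); anchor[i] == 0 means the patient is unlabeled, not a known control.
    """
    clf = LogisticRegression(max_iter=1000).fit(X, anchor)
    # Estimated anchor sensitivity c = P(anchor = 1 | true case), approximated
    # by the mean predicted anchor probability among anchor-positive patients.
    c = clf.predict_proba(X[anchor == 1])[:, 1].mean()
    # Corrected phenotype probabilities for every patient.
    p_case = np.clip(clf.predict_proba(X)[:, 1] / c, 0.0, 1.0)
    return clf, c, p_case
```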
Chronic kidney disease (CKD) currently has a globally increasing incidence and imposes a high cost on health systems. Delayed recognition implies premature mortality due to progressive loss of kidney function. Employing data mining to discover subtle patterns in CKD indicators would contribute to achieving early diagnosis. This work presents the development and evaluation of an explainable prediction model that would support clinicians in the early diagnosis of CKD. The model development is based on a data management pipeline that detects the best combination of ensemble tree algorithms and selected features with respect to classification performance. The results obtained through the pipeline equal the performance of the best CKD prediction models identified in the literature. Furthermore, the main contribution of the paper is an explainability-driven approach that allows selecting the best prediction model while maintaining a balance between accuracy and explainability. The most balanced explainable CKD prediction model implements an XGBoost classifier over a group of 4 features (packed cell volume, specific gravity, albumin, and hypertension), achieving an accuracy of 98.9% with cross-validation and 97.5% on new unseen data. In addition, analysis of the model's explainability by means of different post-hoc techniques determined packed cell volume and specific gravity to be the most relevant features influencing the model's predictions. This small number of selected features results in a reduced cost for the early diagnosis of CKD, making it a promising solution for developing countries.
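
For concreteness, a sketch of the reported configuration, an XGBoost classifier over the four selected features with SHAP as one possible post-hoc explainability technique, is given below; the file name, column names, and hyperparameters are illustrative assumptions.

```python
import pandas as pd
import xgboost as xgb
import shap

# Hypothetical labeled CKD dataset; column names mirror the four selected features.
df = pd.read_csv("ckd.csv")
features = ["packed_cell_volume", "specific_gravity", "albumin", "hypertension"]

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(df[features], df["ckd"])

# Post-hoc explanation: attribute each prediction to the four inputs, so one can
# check whether packed cell volume and specific gravity dominate, as reported.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(df[features])
print(pd.DataFrame(shap_values, columns=features).abs().mean().sort_values(ascending=False))
```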
Adam Davey, Ting Dai, 2020
Methods for addressing missing data have become much more accessible to applied researchers. However, little guidance exists to help researchers systematically identify plausible missing data mechanisms in order to ensure that these methods are appropriately applied. Two considerations motivate the present study. First, psychological research is typically characterized by a large number of potential response variables that may be observed across multiple waves of data collection. This situation makes it more challenging to identify plausible missing data mechanisms than is the case in other fields such as biostatistics, where a small number of dependent variables is typically of primary interest and the main predictor of interest is statistically independent of other covariates. Second, there is growing recognition of the importance of systematic approaches to sensitivity analyses for the treatment of missing data in psychological science. We develop and apply a systematic approach for reducing a large number of observed missingness patterns and demonstrate how these can be used to explore potential missing data mechanisms within multivariate contexts. A large-scale simulation study is used to guide suggestions for which approaches are likely to be most accurate as a function of sample size, number of factors, number of indicators per factor, and proportion of missing data. Three applications of this approach to example datasets suggest that the method is useful in practice.
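
As a small illustration of the first step, enumerating the observed missingness patterns before collapsing them, one might start with something like the pandas snippet below (the file name and variables are hypothetical).

```python
import pandas as pd

# Hypothetical multi-wave dataset with potentially missing responses.
df = pd.read_csv("waves.csv")

# One 0/1 missingness indicator per variable; each distinct row is an observed pattern.
patterns = df.isna().astype(int)
pattern_counts = (patterns.value_counts()
                          .rename("n_rows")
                          .reset_index()
                          .sort_values("n_rows", ascending=False))
# Rare patterns (e.g., those covering only a few rows) are candidates for collapsing
# before exploring which missing-data mechanisms are plausible.
print(pattern_counts.head(10))
```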
Understanding how disks dissipate is essential to studies of planet formation. However, identifying exactly how dust and gas dissipate is complicated by the difficulty of finding objects that are clearly in transition, losing their surrounding material. We use Spitzer IRS spectra to examine 35 photometrically selected candidate cold disks (disks with large inner dust holes). The infrared spectra are supplemented with optical spectra to determine stellar and accretion properties and with 1.3 mm photometry to measure disk masses. Based on detailed SED modeling, we identify 15 new cold disks. The remaining 20 objects have IRS spectra that are consistent with disks without holes, disks that are observed close to edge-on, or stars with background emission. Based on these results, we determine reliable criteria for identifying disks with inner holes from Spitzer photometry and examine criteria already in the literature. Applying these criteria to the c2d surveyed star-forming regions gives a frequency of such objects of at least 4% and most likely of order 12% of the YSO population identified by Spitzer. We also examine the properties of these new cold disks in combination with cold disks from the literature. Hole sizes in this sample are generally smaller than for previously discovered disks and reflect a distribution in better agreement with exoplanet orbit radii. We find correlations between hole size and both disk and stellar masses. Silicate features, including crystalline features, are present in the overwhelming majority of the sample, although the 10 micron feature strength above the continuum declines for holes with radii larger than ~7 AU. In contrast, PAHs are detected in only 2 out of 15 sources. Only a quarter of the cold disk sample shows no signs of accretion, making it unlikely that photoevaporation is the dominant hole-forming process in most cases.
