
Electronic Health Record Phenotyping with Internally Assessable Performance (PhIAP) using Anchor-Positive and Unlabeled Patients

Added by Lingjiao Zhang
Publication date: 2019
Language: English





Building phenotype models using electronic health record (EHR) data conventionally requires manually labeled cases and controls. Assigning labels is labor intensive and, for some phenotypes, identifying gold-standard controls is prohibitive. To facilitate comprehensive clinical decision support and research, we sought to develop an accurate EHR phenotyping approach that assesses its performance without a validation set. Our framework relies on specifying a random subset of cases, potentially using an anchor variable that has excellent positive predictive value and whose sensitivity is independent of predictors. We developed a novel maximum likelihood approach that efficiently leverages data from anchor-positive and unlabeled patients to develop logistic regression phenotyping models. Additionally, we described novel statistical methods for estimating phenotype prevalence and assessing model calibration and predictive performance measures. Theoretical and simulation studies indicated our method generates accurate predicted probabilities, leading to excellent discrimination and calibration, and consistent estimates of phenotype prevalence and anchor sensitivity. The method appeared robust to minor lack-of-fit, and the proposed calibration assessment detected major lack-of-fit. We applied our method to EHR data to develop a preliminary model for identifying patients with primary aldosteronism, which achieved an AUC of 0.99 and a PPV of 0.8. We developed novel statistical methods for accurate model development and validation with minimal manual labeling, facilitating development of scalable, transferable, semi-automated case labeling and practice-specific models. Our EHR phenotyping approach decreases labor-intensive manual phenotyping and annotation, which should enable broader model development and dissemination for EHR clinical decision support and research.
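The anchor-based idea can be sketched in a few lines. The simulation below is illustrative only, not the authors' estimator: the coefficients `W` and `B`, the anchor sensitivity `C`, and the one-predictor logistic model are made-up values for the demo. Under the stated anchor assumptions (excellent PPV, sensitivity independent of predictors), p(s=1|x) = c · p(y=1|x), which lets prevalence be recovered from anchor labels alone; `pu_loglik` writes out the observed-data likelihood that a maximum likelihood approach of this kind would maximize over anchor-positive and unlabeled patients.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(1)
W, B, C = 1.5, -1.0, 0.6      # hypothetical true coefficients and anchor sensitivity
n = 50_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [random.random() < sigmoid(W * x + B) for x in xs]   # latent phenotype (never observed)
ss = [y and (random.random() < C) for y in ys]            # observed anchor label

# Key identity under the anchor assumptions: p(s=1|x) = c * p(y=1|x),
# so phenotype prevalence is recoverable without manual labels:
prev_true = sum(ys) / n
prev_est = (sum(ss) / n) / C          # P(s=1) / c

def pu_loglik(w, b, c):
    """Observed-data log-likelihood over anchor-positive and unlabeled patients."""
    ll = 0.0
    for x, s in zip(xs, ss):
        q = sigmoid(w * x + b)        # modeled phenotype probability
        ll += math.log(c * q) if s else math.log(1.0 - c * q)
    return ll
```

Note that the likelihood is evaluated at the true parameters versus perturbed ones below only to illustrate that the anchor-positive/unlabeled data are informative about both the regression coefficients and the sensitivity c.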




Electronic Health Record (EHR) data has been of tremendous utility in Artificial Intelligence (AI) for healthcare, such as predicting future clinical events. These tasks, however, often come with many challenges when using classical machine learning models due to a myriad of factors including class imbalance and data heterogeneity (i.e., the complex intra-class variances). To address some of these research gaps, this paper leverages the contrastive learning framework and proposes a novel contrastive regularized clinical classification model. The contrastive loss is found to substantially augment EHR-based prediction: it effectively characterizes similar/dissimilar patterns (by its push-and-pull form), while mitigating the highly skewed class distribution by learning more balanced feature spaces (as also echoed by recent findings). In particular, when naively exporting contrastive learning to EHR data, one hurdle is in generating positive samples, since EHR data is not as amenable to data augmentation as image data. To this end, we have introduced two unique positive sampling strategies specifically tailored for EHR data: a feature-based positive sampling that exploits the feature space neighborhood structure to reinforce the feature learning; and an attribute-based positive sampling that incorporates pre-generated patient similarity metrics to define the sample proximity. Both sampling approaches are designed with an awareness of the unique high intra-class variance in EHR data. Our overall framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data with a total of 5,712 patients admitted to a large, urban health system. Specifically, our method reaches a high AUROC prediction score of 0.959, which outperforms other baselines and alternatives: cross-entropy (0.873) and focal loss (0.931).
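The push-and-pull mechanism with feature-based positives can be sketched as follows. This is a generic InfoNCE-style loss in which each patient's positive is its nearest neighbour in feature space — a stand-in for the paper's feature-based positive sampling; the embeddings, temperature, and cosine similarity choice are assumptions for illustration, not the paper's exact formulation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def infonce_feature_positives(embs, temp=0.5):
    """InfoNCE-style contrastive loss: each patient's positive is its
    nearest feature-space neighbour; every other patient is a negative."""
    n, loss = len(embs), 0.0
    for i in range(n):
        sims = [cosine(embs[i], embs[j]) / temp for j in range(n) if j != i]
        pos_sim = max(sims)               # feature-space nearest neighbour as positive
        log_denom = math.log(sum(math.exp(s) for s in sims))
        loss += log_denom - pos_sim       # -log softmax(positive): pull positive, push the rest
    return loss / n
```

As expected of a push-and-pull loss, well-clustered embeddings (tight positive neighbourhoods, distant negatives) score lower than embeddings spread uniformly.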
Although Electronic Health Records contain a large amount of information about the patient's condition and response to treatment, information that could potentially revolutionize clinical practice, it is seldom exploited due to the complexity of its extraction and analysis. We here report on a first integration of an NLP framework for the analysis of clinical records of lung cancer patients, making use of a telephone assistance service of a major Spanish hospital. We specifically show how some relevant data about patient demographics and health condition can be extracted, and how some relevant analyses can be performed, aimed at improving the usefulness of the service. We thus demonstrate that the use of EHR texts, and their integration into a data analysis framework, is technically feasible and worthy of further study.
Modelling disease progression of iron deficiency anaemia (IDA) following oral iron supplement prescriptions is a prerequisite for evaluating the cost-effectiveness of oral iron supplements. Electronic health records (EHRs) from the Clinical Practice Research Datalink (CPRD) provide rich longitudinal data on IDA disease progression in patients registered with 663 General Practitioner (GP) practices in the UK, but they also create challenges in statistical analyses. First, the CPRD data are clustered at multiple levels (i.e., GP practices and patients), but their large volume makes it computationally difficult to estimate standard random effects models for multi-level data. Second, observation times in the CPRD data are irregular and could be informative about the disease progression. For example, shorter/longer gap times between GP visits could be associated with deteriorating/improving IDA. Existing methods to address informative observation times are mostly based on complex joint models, which add further computational burden. To tackle these challenges, we develop a computationally efficient approach to modelling disease progression with EHR data while accounting for variability at multi-level clusters and informative observation times. We apply the proposed method to the CPRD data to investigate IDA improvement and treatment intolerance following oral iron prescriptions in UK primary care.
Duncan Lee, Gavin Shaddick (2012)
The relationship between short-term exposure to air pollution and mortality or morbidity has been the subject of much recent research, in which the standard method of analysis uses Poisson linear or additive models. In this paper we use a Bayesian dynamic generalised linear model (DGLM) to estimate this relationship, which allows the standard linear or additive model to be extended in two ways: (i) the long-term trend and temporal correlation present in the health data can be modelled by an autoregressive process rather than a smooth function of calendar time; (ii) the effects of air pollution are allowed to evolve over time. The efficacy of these two extensions is investigated by applying a series of dynamic and non-dynamic models to air pollution and mortality data from Greater London. A Bayesian approach is taken throughout, and a Markov chain Monte Carlo simulation algorithm is presented for inference. An alternative likelihood-based analysis is also presented, in order to allow a direct comparison with the only previous analysis of air pollution and health data using a DGLM.
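The two extensions can be illustrated with a small generative simulation: (i) the baseline log-rate follows an AR(1) process rather than a smooth trend, and (ii) the pollution coefficient evolves over time as a random walk. All numerical values (series length, pollutant levels, coefficient scales) are made up for the sketch and are not from the paper.

```python
import math
import random

random.seed(7)
T = 200
x = [abs(random.gauss(40.0, 10.0)) for _ in range(T)]   # made-up daily PM10 series

# (i) baseline log-mortality as an AR(1) process around log(50);
# (ii) pollution effect beta_t evolving as a slow random walk.
trend = [math.log(50.0)]
beta = [0.002]
for t in range(1, T):
    trend.append(0.98 * trend[-1] + 0.02 * math.log(50.0) + random.gauss(0.0, 0.02))
    beta.append(beta[-1] + random.gauss(0.0, 1e-4))

def poisson(lam):
    """Knuth's Poisson sampler; adequate for the modest rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Daily death counts from the dynamic Poisson model
deaths = [poisson(math.exp(trend[t] + beta[t] * x[t])) for t in range(T)]
```

Fitting such a model is what the Bayesian MCMC machinery in the paper handles; the sketch only shows the data-generating structure the DGLM assumes.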
Electronic Health Records (EHRs) are typically stored as time-stamped encounter records. Observing temporal relationships between medical records is an integral part of interpreting the information. Hence, statistical analysis of EHRs requires that clinically informed time-interdependent analysis variables (TIAVs) be created. Often, formulation and creation of these variables are iterative and require custom code. We describe a technique of using sequences of time-referenced entities as the building blocks for TIAVs. These sequences represent different aspects of patients' medical history in a contiguous fashion. To illustrate the principles and applications of the method, we provide examples using the Veterans Health Administration's research databases. In the first example, sequences representing medication exposure were used to assess patient selection criteria for a treatment comparative effectiveness study. In the second example, sequences of Charlson Comorbidity conditions and clinical settings of inpatient or outpatient care were used to create variables with which data anomalies and trends were revealed. The third example demonstrated the creation of an analysis variable derived from the temporal dependency of medication exposure and comorbidity. Complex time-interdependent analysis variables can be created from the sequences with simple, reusable code, hence enabling unscripted or automated TIAV creation.
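The sequence-as-building-block idea can be sketched with toy records. The record layout, patient IDs, and the `rx_after_dx` helper below are invented for illustration; the pattern (collapse time-stamped records into per-patient ordered sequences, then derive variables from temporal dependencies) follows the abstract's description.

```python
from collections import defaultdict
from datetime import date

# Hypothetical time-stamped encounter records: (patient, date, kind, value)
records = [
    ("p1", date(2019, 1, 5),  "rx", "metformin"),
    ("p1", date(2019, 3, 2),  "dx", "CHF"),       # a Charlson comorbidity
    ("p1", date(2019, 4, 9),  "rx", "metformin"),
    ("p2", date(2019, 2, 1),  "dx", "CHF"),
    ("p2", date(2019, 2, 20), "rx", "insulin"),
]

# Step 1: collapse each patient's history into a time-ordered sequence,
# the reusable building block the abstract describes.
seqs = defaultdict(list)
for pid, d, kind, val in sorted(records, key=lambda r: (r[0], r[1])):
    seqs[pid].append((d, kind, val))

# Step 2: derive a time-interdependent analysis variable from the sequence,
# e.g. drug exposure occurring on or after a comorbidity diagnosis.
def rx_after_dx(seq, drug, dx):
    dx_date = next((d for d, k, v in seq if k == "dx" and v == dx), None)
    if dx_date is None:
        return False
    return any(k == "rx" and v == drug and d >= dx_date for d, k, v in seq)
```

Because the sequences are ordinary sorted lists, new TIAVs reduce to short reusable predicates over them rather than bespoke one-off scripts.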
