
Selective recruitment designs for improving observational studies using electronic health records

Added by James Barrett
Publication date: 2019
Language: English





Large-scale electronic health records (EHRs) present an opportunity to quickly identify suitable individuals and invite them directly to participate in an observational study. EHRs can contain data from millions of individuals, raising the question of how to optimally select a cohort of size n from a larger pool of size N. In this paper we propose a simple selective recruitment protocol that selects a cohort in which covariates of interest tend to have a uniform distribution. We show that selectively recruited cohorts potentially offer greater statistical power and more accurate parameter estimates than randomly selected cohorts. Our protocol can be applied to studies with multiple categorical and continuous covariates. We apply our protocol to a numerically simulated prospective observational study using a U.K. EHR database of 82,089 patients with stable acute coronary disease. Selective recruitment designs require a smaller sample size, leading to more efficient and cost-effective studies.
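As a rough illustration of the idea, the sketch below selects a cohort whose covariate distribution is flattened by stratifying a single continuous covariate into equal-width bins and filling a quota per bin. This is a minimal sketch of the general approach, not the paper's exact protocol; the bin count, quota rule, and percentile clipping are assumptions.

```python
import numpy as np

def selective_recruit(x, n, n_bins=10, seed=0):
    """Select n individuals from a pool so that covariate x is roughly
    uniform across n_bins equal-width strata.

    Simplified illustration (not the paper's exact protocol): candidates
    are screened in random order and accepted only while their stratum
    is below its quota of n / n_bins.
    """
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(seed)
    # Equal-width strata between the 1st and 99th percentiles; the tails
    # fold into the outermost strata so quotas remain fillable.
    lo, hi = np.percentile(x, [1, 99])
    edges = np.linspace(lo, hi, n_bins + 1)
    strata = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    quota = n // n_bins
    counts = np.zeros(n_bins, dtype=int)
    selected = []
    for i in rng.permutation(len(x)):   # candidates arrive in random order
        if counts[strata[i]] < quota:
            counts[strata[i]] += 1
            selected.append(i)
            if len(selected) == n_bins * quota:
                break
    return np.array(selected)

# Example: pool of N = 82,089 individuals, selected cohort of n = 1,000.
pool = np.random.default_rng(1).normal(size=82_089)
cohort = selective_recruit(pool, n=1_000)
```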



Related research

Electronic Health Records (EHRs) are typically stored as time-stamped encounter records, and observing temporal relationships between medical records is an integral part of interpreting the information. Statistical analysis of EHRs therefore requires that clinically informed time-interdependent analysis variables (TIAVs) be created. The formulation and creation of these variables are often iterative and require custom code. We describe a technique that uses sequences of time-referenced entities as the building blocks for TIAVs. These sequences represent different aspects of a patient's medical history in a contiguous fashion. To illustrate the principles and applications of the method, we provide examples using the Veterans Health Administration's research databases. In the first example, sequences representing medication exposure were used to assess patient selection criteria for a treatment comparative-effectiveness study. In the second example, sequences of Charlson comorbidity conditions and inpatient or outpatient clinical settings were used to create variables with which data anomalies and trends were revealed. The third example demonstrated the creation of an analysis variable derived from the temporal dependency between medication exposure and comorbidity. Complex time-interdependent analysis variables can thus be created from the sequences with simple, reusable code, enabling automated TIAV creation.
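To make the building-block idea concrete, here is a minimal sketch in which a patient's history is a time-ordered sequence of (date, kind, code) entities and a TIAV is derived by walking that sequence; the entity kinds, codes, and helper name are illustrative, not the VHA schema.

```python
from datetime import date

# A patient's history as a time-ordered sequence of (date, kind, code)
# entities; kinds and codes here are made up for illustration.
history = [
    (date(2018, 1, 10), "rx_start", "metformin"),
    (date(2018, 6, 2),  "dx",       "CHF"),        # a Charlson condition
    (date(2018, 9, 30), "rx_stop",  "metformin"),
]

def on_drug_at(history, drug, when):
    """Time-interdependent variable: was exposure to `drug` active on
    `when`? Walks the sequence in order, tracking exposure state."""
    active = False
    for t, kind, code in sorted(history):
        if t > when:
            break
        if code == drug and kind == "rx_start":
            active = True
        elif code == drug and kind == "rx_stop":
            active = False
    return active

# Mirrors the third use case: medication exposure at the time a
# comorbidity was recorded.
dx_date = next(t for t, kind, _ in history if kind == "dx")
print(on_drug_at(history, "metformin", dx_date))   # True
```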
Today, despite decades of developments in medicine and the growing interest in precision healthcare, the vast majority of diagnoses happen once patients begin to show noticeable signs of illness. Early indication and detection of disease, however, can give patients and carers the chance of early intervention, better disease management, and efficient allocation of healthcare resources. The latest developments in machine learning (more specifically, deep learning) provide a great opportunity to address this unmet need. In this study, we introduce BEHRT: a deep neural sequence transduction model for EHR (electronic health records), capable of multitask prediction and disease trajectory mapping. When trained and evaluated on data from nearly 1.6 million individuals, BEHRT shows a striking absolute improvement of 8.0-10.8% in average precision score over the existing state-of-the-art deep EHR models when predicting the onset of 301 conditions. In addition to its superior predictive power, BEHRT provides a personalised view of disease trajectories through its attention mechanism; its flexible architecture enables it to incorporate multiple heterogeneous concepts (e.g., diagnosis, medication, measurements, and more) to improve the accuracy of its predictions; and its (pre-)training results in disease and patient representations that can help us get a step closer to interpretable predictions.
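The general shape of such a model, a Transformer encoder over embedded sequences of medical codes with a multitask prediction head, can be sketched as follows; all dimensions, the vocabulary size, and the pooling choice are illustrative assumptions, not the published BEHRT architecture.

```python
import torch
import torch.nn as nn

class MiniEHRTransformer(nn.Module):
    """Toy BEHRT-style encoder: embed a patient's sequence of medical
    codes, add learned positions, run a Transformer encoder, and predict
    the onset of n_conditions from the first token. Dimensions are
    illustrative, not those of the published model."""
    def __init__(self, vocab_size=2000, n_conditions=301, d=64, max_len=256):
        super().__init__()
        self.code_emb = nn.Embedding(vocab_size, d)
        self.pos_emb = nn.Embedding(max_len, d)
        layer = nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, n_conditions)   # multitask: one logit per condition

    def forward(self, codes):                    # codes: (batch, seq_len) int IDs
        pos = torch.arange(codes.size(1), device=codes.device)
        h = self.encoder(self.code_emb(codes) + self.pos_emb(pos))
        return self.head(h[:, 0])                # (batch, n_conditions) logits

model = MiniEHRTransformer()
logits = model(torch.randint(0, 2000, (8, 32)))  # 8 patients, 32 codes each
```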
Although Electronic Health Records contain a large amount of information about a patient's condition and response to treatment, which can potentially revolutionize clinical practice, such information is seldom used due to the complexity of its extraction and analysis. We report here on a first integration of an NLP framework for the analysis of clinical records of lung cancer patients, making use of a telephone assistance service of a major Spanish hospital. We specifically show how some relevant data about patient demographics and health condition can be extracted, and how some relevant analyses can be performed, aimed at improving the usefulness of the service. We thus demonstrate that the use of EHR texts, and their integration inside a data analysis framework, is technically feasible and worthy of further study.
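As a toy illustration of the kind of rule-based extraction such a pipeline might perform on free-text notes, the sketch below pulls demographic and condition fields from a synthetic sentence; the note, field names, and regular expressions are hypothetical and not the framework described in the abstract.

```python
import re

# Synthetic note and toy patterns, purely for illustration.
note = "72-year-old male, former smoker, diagnosed with stage IIIB NSCLC."

patterns = {
    "age":     r"(\d{1,3})-year-old",
    "sex":     r"\b(male|female)\b",
    "smoking": r"\b(former|current|never) smoker\b",
    "stage":   r"stage\s+([IV]+[AB]?)",
}

# Apply each pattern; missing fields stay None rather than raising.
record = {field: (m.group(1) if (m := re.search(rx, note, re.I)) else None)
          for field, rx in patterns.items()}
print(record)  # {'age': '72', 'sex': 'male', 'smoking': 'former', 'stage': 'IIIB'}
```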
James E. Barrett (2015)
We propose a novel adaptive design for clinical trials with time-to-event outcomes and covariates (which may consist of or include biomarkers). Our method is based on the expected entropy of the posterior distribution of a proportional hazards model. The expected entropy is evaluated as a function of a patient's covariates, and the information gained due to a patient is defined as the decrease in the corresponding entropy. Candidate patients are recruited onto the trial only if they are likely to provide sufficient information; patients with covariates that are deemed uninformative are filtered out. A special case is where all patients are recruited, and we determine the optimal treatment arm allocation. This adaptive design has the advantage of potentially elucidating the relationship between covariates, treatments, and survival probabilities using fewer patients, albeit at the cost of rejecting some candidates. We assess the performance of our adaptive design using data from the German Breast Cancer Study Group and numerical simulations of a biomarker validation trial.
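Under a Gaussian (Laplace-style) approximation to the posterior over the model coefficients, the entropy-decrease criterion can be sketched as below; treating a candidate's contribution to the posterior precision as a rank-one update, and the recruitment threshold itself, are simplifying assumptions, not the paper's exact computation.

```python
import numpy as np

def info_gain(Sigma, x):
    """Expected decrease in posterior entropy (nats) from recruiting a
    patient with covariate vector x, under a Gaussian approximation to
    the posterior over log-hazard-ratio coefficients.

    Recruiting adds roughly x x^T to the posterior precision P, so the
    Gaussian entropy 0.5 * logdet(2*pi*e*Sigma) drops by
    0.5 * (logdet(P + x x^T) - logdet(P)), with P = Sigma^{-1}.
    """
    P = np.linalg.inv(Sigma)
    x = np.asarray(x, dtype=float)
    _, logdet_old = np.linalg.slogdet(P)
    _, logdet_new = np.linalg.slogdet(P + np.outer(x, x))
    return 0.5 * (logdet_new - logdet_old)

# Recruit a candidate only if they are sufficiently informative.
Sigma = np.eye(3)                       # current posterior covariance
threshold = 0.2                         # minimum acceptable gain (nats)
candidate = np.array([1.0, -0.5, 2.0])  # biomarker / covariate values
if info_gain(Sigma, candidate) >= threshold:
    print("recruit")                    # gain = 0.5*log(1 + x^T Sigma x)
else:
    print("filter out")
```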
Recurrent Neural Networks (RNNs) are often used for sequential modeling of adverse outcomes in electronic health records (EHRs) due to their ability to encode past clinical states. These deep, recurrent architectures have displayed increased performance compared to other modeling approaches in a number of tasks, fueling interest in deploying deep models in clinical settings. One of the key elements in ensuring safe model deployment and building user trust is model explainability. Testing with Concept Activation Vectors (TCAV) has recently been introduced as a way of providing human-understandable explanations by comparing high-level concepts to the network's gradients. While the technique has shown promising results in real-world imaging applications, it has not been applied to structured temporal inputs. To enable an application of TCAV to sequential predictions in the EHR, we propose an extension of the method to time-series data. We evaluate the proposed approach on an open EHR benchmark from the intensive care unit, as well as on synthetic data where we are able to better isolate individual effects.
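In standard TCAV, a concept activation vector is the normal of a linear classifier separating concept activations from random ones, and the score is the fraction of examples whose prediction gradient points along it. One simple way to carry this over to time series is sketched below; pooling over all time steps is an assumption here, not necessarily the paper's choice.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def tcav_score_time_series(acts_concept, acts_random, grads):
    """Rough sketch of TCAV adapted to sequential activations.

    acts_concept, acts_random: (n_examples, d) layer activations for
        concept vs. random examples.
    grads: (n_patients, n_steps, d) gradients of the prediction with
        respect to the layer activations at every time step.

    The CAV is the weight vector of a linear classifier separating the
    two activation sets; the score is the fraction of (patient, step)
    gradients with a positive directional derivative along the CAV.
    """
    X = np.vstack([acts_concept, acts_random])
    y = np.r_[np.ones(len(acts_concept)), np.zeros(len(acts_random))]
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    dirderiv = grads @ cav                 # (n_patients, n_steps)
    return (dirderiv > 0).mean()

# Toy shapes: 20 concept / 20 random examples, 64-dim layer, 5 patients
# with 10 time steps each.
rng = np.random.default_rng(0)
score = tcav_score_time_series(rng.normal(size=(20, 64)),
                               rng.normal(size=(20, 64)),
                               rng.normal(size=(5, 10, 64)))
print(score)   # fraction in [0, 1]; ~0.5 means no concept sensitivity
```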
