We conducted a survey of 67 graduate students enrolled in the Privacy and Security in Healthcare course at Indiana University-Purdue University Indianapolis to measure user preference and understanding of the usability and security of three Electronic Health Record authentication methods: a single-factor method (username and password), a single sign-on method using the Central Authentication Service (CAS), and a bio-capsule facial authentication method. This research aims to explore the relationship between security and usability and to measure the effect of perceived security on usability across these three authentication methods. We developed a formative-formative Partial Least Squares Structural Equation Modeling (PLS-SEM) model to measure the relationship between the latent variables Usability and Security. The measurement model was built from five observed variables (measures): Efficiency and Effectiveness, Satisfaction, Preference, Concerns, and Confidence. The results highlight the importance and impact of these measures on the latent variables and the relationship between the latent variables. The PLS-SEM analysis found that security has a positive impact on usability for the single sign-on and bio-capsule facial authentication methods. We conclude that the facial authentication method was the most secure and usable of the three. Further, descriptive analysis was performed to draw out interesting findings from the survey regarding the observed variables.
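As a rough illustration of the path being estimated, the sketch below uses hypothetical column names and a deliberate simplification: equally weighted standardized composites and an ordinary least-squares slope instead of the iterative PLS indicator weighting. It approximates the Security to Usability path coefficient from survey indicators and is not the authors' analysis code.

```python
# Illustrative sketch, not the study's PLS-SEM estimation: approximate each
# latent variable with an equally weighted composite of standardized survey
# indicators, then estimate the Security -> Usability path as an OLS slope.
# Column names are hypothetical stand-ins for the five observed measures.
import numpy as np
import pandas as pd

def security_usability_path(df: pd.DataFrame) -> float:
    z = (df - df.mean()) / df.std(ddof=0)  # standardize indicators
    security = z[["concerns", "confidence"]].mean(axis=1)
    usability = z[["efficiency_effectiveness", "satisfaction", "preference"]].mean(axis=1)
    # Path coefficient of the composites (simple OLS slope).
    beta = np.cov(security, usability, ddof=0)[0, 1] / np.var(security, ddof=0)
    return float(beta)

# Example: responses for one authentication method (e.g., facial bio-capsule).
# df = pd.read_csv("facial_method_responses.csv")
# print(security_usability_path(df))
```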
Today, despite decades of developments in medicine and the growing interest in precision healthcare, vast majority of diagnoses happen once patients begin to show noticeable signs of illness. Early indication and detection of diseases, however, can provide patients and carers with the chance of early intervention, better disease management, and efficient allocation of healthcare resources. The latest developments in machine learning (more specifically, deep learning) provides a great opportunity to address this unmet need. In this study, we introduce BEHRT: A deep neural sequence transduction model for EHR (electronic health records), capable of multitask prediction and disease trajectory mapping. When trained and evaluated on the data from nearly 1.6 million individuals, BEHRT shows a striking absolute improvement of 8.0-10.8%, in terms of Average Precision Score, compared to the existing state-of-the-art deep EHR models (in terms of average precision, when predicting for the onset of 301 conditions). In addition to its superior prediction power, BEHRT provides a personalised view of disease trajectories through its attention mechanism; its flexible architecture enables it to incorporate multiple heterogeneous concepts (e.g., diagnosis, medication, measurements, and more) to improve the accuracy of its predictions; and its (pre-)training results in disease and patient representations that can help us get a step closer to interpretable predictions.
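As a rough illustration of the kind of architecture described (not the released BEHRT code), the sketch below builds a small Transformer encoder in PyTorch over diagnosis-code sequences with age and positional embeddings, followed by a multi-label head over 301 conditions; all vocabulary sizes, dimensions, and layer counts are illustrative assumptions.

```python
# Minimal BEHRT-style sketch (assumptions, not the authors' model): diagnosis
# codes are embedded together with age and positional embeddings, passed
# through a Transformer encoder, and a sigmoid head scores the onset of each
# of 301 future conditions (multi-label prediction).
import torch
import torch.nn as nn

class BEHRTSketch(nn.Module):
    def __init__(self, n_codes=2000, n_ages=120, n_conditions=301,
                 d_model=128, n_heads=4, n_layers=2, max_len=256):
        super().__init__()
        self.code_emb = nn.Embedding(n_codes, d_model, padding_idx=0)
        self.age_emb = nn.Embedding(n_ages, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_conditions)

    def forward(self, codes, ages):  # both (batch, seq_len) integer tensors
        pos = torch.arange(codes.size(1), device=codes.device).unsqueeze(0)
        x = self.code_emb(codes) + self.age_emb(ages) + self.pos_emb(pos)
        h = self.encoder(x, src_key_padding_mask=(codes == 0))
        return torch.sigmoid(self.head(h[:, 0]))  # first token as sequence summary

# model = BEHRTSketch()
# probs = model(torch.randint(1, 2000, (8, 64)), torch.randint(0, 120, (8, 64)))
```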
Locimetric authentication is a form of graphical authentication in which users validate their identity by selecting predetermined points on a predetermined image. Its primary advantage over the ubiquitous text-based approach stems from users' superior ability to remember visual information over textual information, coupled with the authentication process being transformed into one requiring recognition (instead of recall). Ideally, these differences enable users to create more complex passwords, which theoretically are more secure. Yet locimetric authentication has one significant weakness: hot-spots. This term refers to areas of an image that users gravitate towards and which consequently have a higher probability of being selected. Although many strategies have been proposed to counter the hot-spot problem, one area that has received little attention is that of resolution. The hypothesis here is that high-resolution images would afford the user a larger password space, and consequently any hot-spots would dissipate. We employ an experimental approach in which users generate a series of locimetric passwords on either low- or high-resolution images. Our research reveals the presence of hot-spots even in high-resolution images, albeit at a lower level than that exhibited with low-resolution images. We conclude by reinforcing that other techniques - such as existing or new software controls or training - need to be utilized to mitigate the emergence of hot-spots in the locimetric scheme.
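For readers unfamiliar with the scheme, a minimal sketch of the verification step follows; the tolerance radius and point format are illustrative assumptions, not parameters from this study.

```python
# Hedged sketch of a locimetric check: a password is an ordered list of click
# points on an image, and a login attempt succeeds when every click falls
# within a tolerance radius of the corresponding enrolled point. The tolerance
# value below is an assumption for illustration only.
import math

TOLERANCE_PX = 15  # acceptance radius around each enrolled point, in pixels

def verify_locimetric(enrolled, attempt, tol=TOLERANCE_PX):
    """enrolled/attempt: ordered lists of (x, y) pixel coordinates."""
    if len(enrolled) != len(attempt):
        return False
    return all(math.dist(p, q) <= tol for p, q in zip(enrolled, attempt))

# verify_locimetric([(120, 340), (610, 95)], [(118, 345), (612, 90)])  # -> True
```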
An estimated 180 papers focusing on deep learning and EHR were published between 2010 and 2018. Despite the common workflow structure appearing in these publications, no trusted and verified software framework exists, forcing researchers to arduously repeat previous work. In this paper, we propose Cardea, an extensible, open-source automated machine learning framework that encapsulates common prediction problems in the health domain and allows users to build predictive models with their own data. The system relies on two components: Fast Healthcare Interoperability Resources (FHIR) -- a standardized data structure for electronic health systems -- and several AutoML frameworks for automated feature engineering, model selection, and tuning. We augment these components with an adaptive data assembler and comprehensive data- and model-auditing capabilities. We demonstrate our framework via five prediction tasks on MIMIC-III and Kaggle datasets, which highlight Cardea's human competitiveness, flexibility in problem definition, extensive feature generation capability, adaptable automatic data assembler, and usability.
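To make the described workflow concrete, the sketch below shows the shape of the pipeline such a framework automates: assemble FHIR-shaped resources into a patient-level table, then let an automated search pick and tune a model. The function names, columns, and the grid search standing in for the AutoML components are all hypothetical; this is not Cardea's actual API.

```python
# Hypothetical workflow sketch (illustrative names, not Cardea's API):
# 1) assemble FHIR-shaped resources into one flat training table,
# 2) run a small model-selection/tuning search as a stand-in for AutoML.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

def assemble_fhir(encounters: pd.DataFrame, patients: pd.DataFrame) -> pd.DataFrame:
    """Join FHIR-shaped resources on the patient identifier (hypothetical columns)."""
    return encounters.merge(patients, left_on="subject_id", right_on="patient_id")

def automl_fit(X, y):
    """Stand-in for automated model selection and tuning: a small grid search."""
    search = GridSearchCV(RandomForestClassifier(),
                          {"n_estimators": [100, 300], "max_depth": [5, None]},
                          scoring="roc_auc", cv=3)
    return search.fit(X, y)

# table = assemble_fhir(encounters_df, patients_df)
# X, y = table.drop(columns=["readmitted_30d"]), table["readmitted_30d"]
# X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y)
# model = automl_fit(X_tr, y_tr); print(model.best_score_)
```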
The use of collaborative and decentralized machine learning techniques such as federated learning has the potential to enable the development and deployment of clinical risk prediction models in low-resource settings without requiring that sensitive data be shared or stored in a central repository. This process necessitates communication of model weights or updates between collaborating entities, but it is unclear to what extent patient privacy is compromised as a result. To gain insight into this question, we study the efficacy of centralized versus federated learning in both private and non-private settings. The clinical prediction tasks we consider are the prediction of prolonged length of stay and in-hospital mortality across thirty-one hospitals in the eICU Collaborative Research Database. We find that while it is straightforward to apply differentially private stochastic gradient descent to achieve strong privacy bounds when training in a centralized setting, it is considerably more difficult to do so in the federated setting.
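For context, a minimal sketch of one differentially private SGD step in the centralized setting is shown below (PyTorch): per-example gradients are clipped to an L2 bound and Gaussian noise proportional to that bound is added before the update. The clipping bound, noise multiplier, and learning rate are illustrative assumptions rather than values from the paper.

```python
# Hedged sketch of one DP-SGD step (centralized setting), illustrating the
# mechanism: clip each per-example gradient to L2 norm C, sum, add Gaussian
# noise scaled by C, then take an averaged update. Hyperparameters are
# illustrative assumptions only.
import torch

def dp_sgd_step(model, loss_fn, xb, yb, lr=0.1, clip_norm=1.0, noise_mult=1.1):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]
    for x, y in zip(xb, yb):  # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        scale = min(1.0, clip_norm / (float(norm) + 1e-12))  # clip to norm C
        for g, p in zip(summed, params):
            g += p.grad * scale
    with torch.no_grad():
        for g, p in zip(summed, params):
            noise = torch.normal(0.0, noise_mult * clip_norm, g.shape)
            p -= lr * (g + noise) / len(xb)  # noisy averaged update
```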
Predictive modeling with electronic health record (EHR) data is anticipated to drive personalized medicine and improve healthcare quality. Constructing predictive statistical models typically requires extraction of curated predictor variables from normalized EHR data, a labor-intensive process that discards the vast majority of information in each patient's record. We propose a representation of patients' entire, raw EHR records based on the Fast Healthcare Interoperability Resources (FHIR) format. We demonstrate that deep learning methods using this representation are capable of accurately predicting multiple medical events from multiple centers without site-specific data harmonization. We validated our approach using de-identified EHR data from two U.S. academic medical centers with 216,221 adult patients hospitalized for at least 24 hours. In the sequential format we propose, this volume of EHR data unrolled into a total of 46,864,534,945 data points, including clinical notes. Deep learning models achieved high accuracy for tasks such as predicting in-hospital mortality (AUROC across sites 0.93-0.94), 30-day unplanned readmission (AUROC 0.75-0.76), prolonged length of stay (AUROC 0.85-0.86), and all of a patient's final discharge diagnoses (frequency-weighted AUROC 0.90). These models outperformed state-of-the-art traditional predictive models in all cases. We also present a case study of a neural-network attribution system, which illustrates how clinicians can gain some transparency into the predictions. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios, complete with explanations that directly highlight evidence in the patient's chart.
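As a rough illustration of the sequential representation described (an assumption-laden sketch, not the authors' pipeline), the code below flattens simplified FHIR-like resources into one chronological token stream per patient, the kind of sequence a deep learning model can then consume.

```python
# Illustrative sketch: unroll simplified FHIR-like resources into timestamped
# tokens so a patient's whole record becomes one chronological event sequence.
# Field names and token format are assumptions for illustration only.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Token:
    time: datetime
    value: str  # e.g. "Condition/ICD10/I50.9" or "Observation/heart_rate/112"

def unroll_fhir(resources: list[dict]) -> list[Token]:
    """Flatten simplified FHIR-like resources into a time-ordered token stream."""
    tokens = []
    for r in resources:
        when = datetime.fromisoformat(r["effectiveDateTime"])
        tokens.append(Token(when, f'{r["resourceType"]}/{r["code"]}/{r.get("value", "")}'))
    return sorted(tokens, key=lambda t: t.time)

# seq = unroll_fhir([
#     {"resourceType": "Condition", "code": "ICD10/I50.9",
#      "effectiveDateTime": "2018-03-02T10:00:00"},
#     {"resourceType": "Observation", "code": "heart_rate", "value": 112,
#      "effectiveDateTime": "2018-03-02T09:30:00"},
# ])
```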