
Predicting Rate of Cognitive Decline at Baseline Using a Deep Neural Network with Multidata Analysis

Published by Sema Candemir
Publication date: 2020
Research language: English





Purpose: This study investigates whether a machine-learning-based system can predict the rate of cognitive decline in mildly cognitively impaired patients by processing only the clinical and imaging data collected at the initial visit. Approach: We built a predictive model based on a supervised hybrid neural network that uses a 3-dimensional convolutional neural network to perform volume analysis of magnetic resonance imaging (MRI) scans and integrates non-imaging clinical data at the fully connected layer of the architecture. The experiments were conducted on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Results: Experimental results confirm that there is a correlation between cognitive decline and the data obtained at the first visit. The system achieved an area under the receiver operating characteristic curve (AUC) of 0.70 for cognitive-decline class prediction. Conclusion: To our knowledge, this is the first study to predict slowly deteriorating/stable versus rapidly deteriorating classes by processing routinely collected baseline clinical and demographic data (baseline MRI, baseline MMSE, scalar volumetric data, age, gender, education, ethnicity, and race). The training labels are derived from the rate of change in MMSE scores. Unlike studies in the literature that focus on predicting Mild Cognitive Impairment-to-Alzheimer's disease conversion and disease classification, we approach the problem as early prediction of the rate of cognitive decline in MCI patients.
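The fusion step described above, image features from a 3D CNN concatenated with clinical covariates at the fully connected layer, can be sketched in plain NumPy. This is an illustrative single-channel forward pass under assumed shapes, not the authors' architecture; the kernel, weights, and the four normalized clinical covariates are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3d_valid(volume, kernel):
    """Naive single-channel 3-D 'valid' convolution (illustration only)."""
    d, h, w = kernel.shape
    out_shape = (volume.shape[0] - d + 1,
                 volume.shape[1] - h + 1,
                 volume.shape[2] - w + 1)
    out = np.empty(out_shape)
    for i in range(out_shape[0]):
        for j in range(out_shape[1]):
            for k in range(out_shape[2]):
                out[i, j, k] = np.sum(volume[i:i+d, j:j+h, k:k+w] * kernel)
    return out

def hybrid_forward(mri_volume, clinical, kernel, w_fc, b_fc):
    """Fuse a pooled 3D-CNN feature with clinical covariates at the FC layer."""
    feat = np.maximum(conv3d_valid(mri_volume, kernel), 0.0)  # ReLU feature map
    pooled = feat.mean()                                      # global average pooling
    fused = np.concatenate([[pooled], clinical])              # fusion by concatenation
    logit = fused @ w_fc + b_fc
    return 1.0 / (1.0 + np.exp(-logit))                       # probability of rapid decline
```

In a real model the convolution would have many channels and learned weights; the point here is only that imaging and non-imaging inputs meet as one concatenated vector before the final dense layer.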




Read also

Background: Cognitive assessments represent the most common clinical routine for the diagnosis of Alzheimer's Disease (AD). Given the large number of cognitive assessment tools and time-limited office visits, it is important to determine a proper set of cognitive tests for different subjects. Most current studies create guidelines for cognitive test selection in a targeted population, but these are not customized for each individual subject. In this manuscript, we develop a machine learning paradigm that enables personalized prioritization of cognitive assessments. Method: We adapt a newly developed learning-to-rank approach, PLTR, to implement our paradigm. This method learns a latent scoring function that pushes the most effective cognitive assessments to the top of the prioritization list. We also extend PLTR to better separate the most effective cognitive assessments from the less effective ones. Results: Our empirical study on the ADNI data shows that the proposed paradigm outperforms the state-of-the-art baselines at identifying and prioritizing individual-specific cognitive biomarkers. We conduct experiments in cross-validation and leave-out validation settings; in the two settings, our paradigm significantly outperforms the best baselines, with improvements of up to 22.1% and 19.7%, respectively, on prioritizing cognitive features. Conclusions: The proposed paradigm achieves superior performance in prioritizing cognitive biomarkers. The cognitive biomarkers prioritized at the top hold great potential to facilitate personalized diagnosis, disease subtyping, and ultimately precision medicine in AD.
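A scoring function that "pushes the most effective assessments onto the top" is the core idea of pairwise learning-to-rank. The sketch below is a generic RankNet-style pairwise logistic loss, not the PLTR method from the abstract; `pairwise_rank_loss` and its inputs are illustrative.

```python
import numpy as np

def pairwise_rank_loss(scores, relevance):
    """RankNet-style pairwise logistic loss: penalize every pair of items
    whose scores are ordered against their relevance labels."""
    loss, n_pairs = 0.0, 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if relevance[i] > relevance[j]:  # item i should rank above item j
                loss += np.log1p(np.exp(scores[j] - scores[i]))
                n_pairs += 1
    return loss / max(n_pairs, 1)
```

Minimizing this loss over a parametric scoring function drives effective items toward the top of the ranked list; a correctly ordered score vector yields a strictly lower loss than a reversed one.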
Recent development of quantitative myocardial blood flow (MBF) mapping allows direct evaluation of absolute myocardial perfusion by computing pixel-wise flow maps. Clinical studies suggest that quantitative evaluation is more desirable for objectivity and efficiency. Objective assessment can be further facilitated by segmenting the myocardium and automatically generating reports following the AHA model, freeing the user from interactive analysis and leading to a one-click solution that improves workflow. This paper proposes a deep-neural-network-based computational workflow for inline myocardial perfusion analysis. Adenosine stress and rest perfusion scans were acquired from three hospitals. The training set included N = 1,825 perfusion series from 1,034 patients; an independent test set included 200 scans from 105 patients. Data were consecutively acquired at each site. A convolutional neural network (CNN) model was trained to segment the LV cavity, myocardium, and right ventricle by processing incoming 2D+T perfusion Gd series. Model outputs were compared to manual ground truth for accuracy of segmentation and of flow measures derived on a global and per-sector basis. The trained models were integrated onto MR scanners for efficient inference. Segmentation accuracy and myocardial flow measures were compared between CNN models and manual ground truth. The mean Dice ratio of CNN-derived myocardium was 0.93 +/- 0.04. Both global flow and per-sector values showed no significant difference compared to manual results. The AHA 16-segment model was automatically generated and reported on the MR scanner. As a result, fully automated analysis of perfusion flow mapping was achieved. This solution was integrated on the MR scanner, enabling one-click analysis and reporting of myocardial blood flow.
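The Dice ratio used above to validate the myocardium segmentation has a standard definition. As a reference, here is a minimal NumPy implementation of the Dice similarity coefficient between two binary masks (not the paper's code):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND truth| / (|pred| + |truth|)."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total else 1.0  # both empty -> perfect agreement
```

A Dice of 0.93 therefore means the CNN and manual myocardium masks overlap in roughly 93% of their combined area.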
Biopolymer gels, such as those made out of fibrin or collagen, are widely used in tissue engineering applications and biomedical research. Moreover, fibrin naturally assembles into gels in vivo during wound healing and thrombus formation. Macroscale biopolymer gel mechanics are dictated by the microscale fiber network. Hence, accurate description of biopolymer gels can be achieved using representative volume elements (RVE) that explicitly model the discrete fiber networks of the microscale. These RVE models, however, cannot be efficiently used to model the macroscale due to the challenges and computational demands of multiscale coupling. Here, we propose the use of an artificial, fully connected neural network (FCNN) to efficiently capture the behavior of the RVE models. The FCNN was trained on 1100 fiber networks subjected to 121 biaxial deformations. The stress data from the RVE, together with the total energy and the condition of incompressibility of the surrounding matrix, were used to determine the derivatives of an unknown strain energy function with respect to the deformation invariants. During training, the loss function was modified to ensure convexity of the strain energy function and symmetry of its Hessian. A general FCNN model was coded into a user material subroutine (UMAT) in the software Abaqus. In this work, the FCNN trained on the discrete fiber network data was used in finite element simulations of fibrin gels using our UMAT. We anticipate that this work will enable further integration of machine learning tools with computational mechanics. It will also improve computational modeling of biological materials characterized by a multiscale structure.
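The modified loss described above, enforcing convexity of the learned strain energy and symmetry of its Hessian, can be illustrated with a finite-difference penalty. This is a sketch of the general idea under assumed conventions, not the paper's training code; `convexity_penalty` and the step size `eps` are illustrative choices.

```python
import numpy as np

def hessian(f, x, eps=1e-4):
    """Numerical Hessian of a scalar function f at point x (central differences)."""
    n = x.size
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e_i = np.zeros(n); e_i[i] = eps
            e_j = np.zeros(n); e_j[j] = eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return H

def convexity_penalty(f, x):
    """Penalty term: squared negative Hessian eigenvalues (non-convexity)
    plus squared Hessian asymmetry, both zero for a convex, smooth f."""
    H = hessian(f, x)
    eigs = np.linalg.eigvalsh(0.5 * (H + H.T))
    non_convex = np.square(np.minimum(eigs, 0.0)).sum()
    asym = np.square(H - H.T).sum()
    return non_convex + asym
```

Adding such a term to the data-fitting loss biases training toward strain energy functions that are convex in the deformation invariants, which is what keeps the downstream finite element problem well posed.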
Complex biological functions are carried out by the interaction of genes and proteins. Uncovering the gene regulation network behind a function is one of the central themes in biology. Typically, it involves extensive experiments of genetics, biochemistry and molecular biology. In this paper, we show that much of the inference task can be accomplished by a deep neural network (DNN), a form of machine learning or artificial intelligence. Specifically, the DNN learns from the dynamics of the gene expression. The learnt DNN behaves like an accurate simulator of the system, on which one can perform in-silico experiments to reveal the underlying gene network. We demonstrate the method with two examples: biochemical adaptation and the gap-gene patterning in fruit fly embryogenesis. In the first example, the DNN can successfully find the two basic network motifs for adaptation - the negative feedback and the incoherent feed-forward. In the second and much more complex example, the DNN can accurately predict behaviors of essentially all the mutants. Furthermore, the regulation network it uncovers is strikingly similar to the one inferred from experiments. In doing so, we develop methods for deciphering the gene regulation network hidden in the DNN black box. Our interpretable DNN approach should have broad applications in genotype-phenotype mapping.
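As a toy illustration of the "learnt simulator plus in-silico experiment" idea, the sketch below fits a linear surrogate to synthetic expression trajectories (least squares standing in for DNN training) and then clamps one "gene" to zero during rollout, mimicking a knockout. The matrix `A_true` and all numbers are fabricated for illustration and are unrelated to the paper's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth linear dynamics x_{t+1} = A x_t (a stand-in for a real gene network)
A_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.2],
                   [0.1, 0.0, 0.7]])

# Simulate trajectories to build a training set of (x_t, x_{t+1}) pairs
X, Y = [], []
for _ in range(50):
    x = rng.random(3)
    for _ in range(20):
        x_next = A_true @ x
        X.append(x)
        Y.append(x_next)
        x = x_next
X, Y = np.array(X), np.array(Y)

# "Train" the simulator: least squares recovers the dynamics from data
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

def simulate(A, x0, steps, knockout=None):
    """Roll out the learned model; optionally clamp one gene to zero
    at every step (in-silico knockout experiment)."""
    x = x0.copy()
    if knockout is not None:
        x[knockout] = 0.0
    for _ in range(steps):
        x = A @ x
        if knockout is not None:
            x[knockout] = 0.0
    return x
```

Comparing wild-type and knockout rollouts of the learned model reveals which genes influence which, without new wet-lab experiments; the paper's DNN plays the same role for nonlinear dynamics.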
Amid the pandemic of 2019 novel coronavirus disease (COVID-19), caused by SARS-CoV-2, a vast amount of drug research for prevention and treatment has been quickly conducted, but these efforts have been unsuccessful thus far. Our objective is to prioritize repurposable drugs using a drug repurposing pipeline that systematically integrates multiple SARS-CoV-2 and drug interactions, deep graph neural networks, and in-vitro/population-based validations. We first collected all the available drugs (n = 3,635) involved in COVID-19 patient treatment through CTDbase. We built a SARS-CoV-2 knowledge graph based on the interactions among virus baits, host genes, pathways, drugs, and phenotypes. A deep graph neural network approach was used to derive the candidate representation based on the biological interactions. We prioritized the candidate drugs using clinical trial history, and then validated them with their genetic profiles, in vitro experimental efficacy, and electronic health records. We highlight the top 22 drugs including Azithromycin, Atorvastatin, Aspirin, Acetaminophen, and Albuterol. We further pinpointed drug combinations that may synergistically target COVID-19. In summary, we demonstrated that the integration of extensive interactions, deep neural networks, and rigorous validation can facilitate the rapid identification of candidate drugs for COVID-19 treatment.
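The candidate-representation step, deriving drug scores from biological interactions with a graph neural network, can be caricatured as message passing on a tiny bait-gene-drug graph. This is not the authors' model; `propagate` and the 0.5 mixing weight are illustrative, and the graph below is a made-up four-node example.

```python
import numpy as np

def propagate(adj, features, steps=2):
    """Mean-neighbor message passing: each node repeatedly mixes its own
    feature with the average feature of its neighbors."""
    deg = adj.sum(axis=1, keepdims=True)
    norm = adj / np.maximum(deg, 1)           # row-normalized adjacency
    h = features
    for _ in range(steps):
        h = 0.5 * h + 0.5 * (norm @ h)        # mix self and neighbor signal
    return h
```

On a toy graph where a virus bait connects to a host gene and that gene connects to drug A, while drug B is isolated, the "proximity to virus" signal placed on the bait diffuses to drug A but never reaches drug B, so drug A ends up with the higher candidate score.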
