Recent evidence has shown that structural magnetic resonance imaging (MRI) is an effective tool for Alzheimer's disease (AD) prediction and diagnosis. While traditional MRI-based diagnosis uses images acquired at a single time point, a longitudinal study is more sensitive and accurate in detecting early pathological changes of AD. Two main difficulties arise in longitudinal MRI-based diagnosis: (1) inconsistent longitudinal scans among subjects (i.e., different scanning times and different total numbers of scans); and (2) heterogeneous progressions of high-dimensional regions of interest (ROIs) in MRI. In this work, we propose a novel feature selection and estimation method that can be applied to extract features from heterogeneous longitudinal MRI. A key ingredient of our method is the combination of smoothing splines and the $\ell_1$-penalty. We perform experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The results corroborate the advantages of the proposed method for AD prediction in longitudinal studies.
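The abstract does not give implementation details, but the core combination can be sketched as follows: fit a smoothing spline to each subject's irregularly sampled ROI trajectory, evaluate it on a shared time grid to obtain fixed-length features, and select among them with an $\ell_1$-penalized classifier. Everything below (the 36-month grid, the smoothing factor, the toy cohort) is a hypothetical illustration, not the authors' estimator.

```python
# Minimal sketch (not the authors' exact method): splines handle the
# inconsistent scan schedules, the l1 penalty performs feature selection.
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.linear_model import LogisticRegression

def spline_features(scan_times, roi_values, grid):
    """Fit a smoothing spline to one ROI trajectory and evaluate it on a
    shared time grid, so subjects with different scan times and different
    numbers of scans yield fixed-length feature vectors."""
    k = min(3, len(scan_times) - 1)          # spline order limited by #scans
    spl = UnivariateSpline(scan_times, roi_values, k=k, s=1.0)
    return spl(grid)

rng = np.random.default_rng(0)
grid = np.linspace(0, 36, 7)                 # months; hypothetical grid
X, y = [], []
for subject in range(40):                    # toy cohort
    n_scans = rng.integers(4, 8)             # inconsistent number of scans
    t = np.sort(rng.uniform(0, 36, n_scans)) # inconsistent scan times
    label = subject % 2
    v = 1.0 - 0.01 * label * t + 0.05 * rng.standard_normal(n_scans)
    X.append(spline_features(t, v, grid))
    y.append(label)

clf = LogisticRegression(penalty="l1", C=0.5, solver="liblinear")
clf.fit(np.array(X), y)
print("selected grid points:", np.nonzero(clf.coef_[0])[0])
```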
Alzheimer's disease is the most common cause of dementia and the fifth-leading cause of death among elderly people. Given its high genetic heritability (79%), finding disease-causal genes is a crucial step toward finding a treatment for AD. Following the International Genomics of Alzheimer's Project (IGAP), many disease-associated genes have been identified; however, little is known about how those disease-associated genes affect gene expression and disease-related pathways. We integrated GWAS summary data from IGAP with five different expression-level datasets using the TWAS method and identified 15 disease-causal genes under strict multiple-testing correction ($\alpha < 0.05$), 4 of which are newly identified; we identified an additional 29 potential disease-causal genes under false discovery rate control ($\alpha < 0.05$), 21 of which are newly identified. Many of the genes we identified are also associated with autoimmune disorders.
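The two significance criteria in this abstract (strict family-wise control versus false discovery rate control, both at $\alpha = 0.05$) can be illustrated with standard corrections; the sketch below uses Bonferroni and Benjamini-Hochberg on synthetic gene-level p-values, which stand in for the TWAS association statistics.

```python
# Illustrative sketch of the two thresholds: strict multiple-testing
# correction (Bonferroni) versus FDR control (Benjamini-Hochberg),
# applied to hypothetical TWAS gene-level p-values.
import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
pvals = rng.uniform(size=1000) ** 3          # synthetic p-values, skewed small

strict, _, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
fdr, _, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print("Bonferroni hits:", strict.sum())
print("BH-FDR hits    :", fdr.sum())
print("FDR-only hits  :", (fdr & ~strict).sum())  # the 'additional' genes
```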
Two key challenges in modern statistical applications are the large amount of information recorded per individual and the fact that such data are often collected not all at once but in batches. These batch effects can be complex, causing distortions in both mean and variance. We propose a novel sparse latent factor regression model to integrate such heterogeneous data. The model provides a tool for data exploration via dimensionality reduction while correcting for a range of batch effects. We study the use of several sparse priors (local and non-local) to learn the dimension of the latent factors. Our model is fitted in a deterministic fashion by means of an EM algorithm for which we derive closed-form updates, contributing a novel scalable algorithm for non-local priors of interest beyond the immediate scope of this paper. We present several examples, with a focus on bioinformatics applications. Our results show an increase in the accuracy of the dimensionality reduction, with non-local priors substantially improving the reconstruction of factor cardinality, as well as the need to account for batch effects to obtain reliable results. Our model provides a novel approach to latent factor regression that balances sparsity with sensitivity and is highly computationally efficient.
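A greatly simplified stand-in for this model is sketched below: remove per-batch mean shifts, then fit a probabilistic PCA factor model by EM with closed-form updates. The paper's actual model additionally places sparse (local and non-local) priors on the loadings and handles variance batch effects; this sketch covers only the EM skeleton, and all data and dimensions are toy choices.

```python
# Simplified sketch of latent factor estimation with a crude batch
# correction: probabilistic PCA fitted by EM (Tipping & Bishop updates).
import numpy as np

def ppca_em(Y, batch, k, n_iter=100):
    Yc = Y.copy()
    for b in np.unique(batch):               # subtract each batch's mean profile
        Yc[batch == b] -= Yc[batch == b].mean(axis=0)
    n, d = Yc.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, k))
    sigma2 = 1.0
    for _ in range(n_iter):
        # E-step: posterior moments of the latent factors
        M = W.T @ W + sigma2 * np.eye(k)
        Minv = np.linalg.inv(M)
        Ez = Yc @ W @ Minv                   # n x k posterior means
        Ezz = n * sigma2 * Minv + Ez.T @ Ez  # summed second moments
        # M-step: closed-form updates for loadings and noise variance
        W = Yc.T @ Ez @ np.linalg.inv(Ezz)
        sigma2 = (np.sum(Yc**2) - np.trace(Ez.T @ Yc @ W)) / (n * d)
    return W, sigma2

# toy data: 3 true factors, 2 batches with different mean offsets
rng = np.random.default_rng(2)
Z = rng.standard_normal((200, 3))
L = rng.standard_normal((50, 3))
batch = np.repeat([0, 1], 100)
Y = Z @ L.T + 2.0 * batch[:, None] + 0.3 * rng.standard_normal((200, 50))
W, s2 = ppca_em(Y, batch, k=3)
print("estimated noise variance:", round(s2, 3))
```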
Accurate diagnosis of Alzheimer's Disease (AD) entails clinical evaluation of multiple cognition metrics and biomarkers. Metrics such as the Alzheimer's Disease Assessment Scale-Cognitive test (ADAS-cog) comprise multiple subscores that quantify different aspects of a patient's cognitive state, such as learning, memory, and language production/comprehension. Although computer-aided diagnostic techniques for classifying a patient's current disease state exist, they provide little insight into the relationship between changes in brain structure and the different aspects of a patient's cognitive state that evolve over time in AD. We have developed a Convolutional Neural Network architecture that can concurrently predict the trajectories of the 13 subscores that make up a subject's ADAS-cog examination results, up to 36 months from image acquisition time, from a single minimally preprocessed structural MRI scan, without resorting to manual feature extraction. Mean performance metrics are within the range of those of existing techniques that require manual feature selection and are limited to predicting aggregate scores.
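The abstract does not specify the network, but the input/output structure it describes (one MRI volume in, 13 subscore trajectories out) can be sketched with a small 3D CNN. The layer sizes, the 64-voxel volumes, and the four prediction horizons below are illustrative assumptions, not the authors' architecture.

```python
# Illustrative PyTorch sketch: a 3D CNN mapping one structural MRI volume
# to 13 ADAS-cog subscores at several future horizons (here 4 visits up to
# month 36), as multi-output regression without hand-crafted features.
import torch
import torch.nn as nn

N_SUBSCORES, N_HORIZONS = 13, 4               # e.g. months 0, 12, 24, 36

class SubscoreCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),          # global pooling over the volume
        )
        self.head = nn.Linear(32, N_SUBSCORES * N_HORIZONS)

    def forward(self, x):                     # x: (B, 1, D, H, W)
        z = self.features(x).flatten(1)
        return self.head(z).view(-1, N_HORIZONS, N_SUBSCORES)

model = SubscoreCNN()
mri = torch.randn(2, 1, 64, 64, 64)           # toy volumes
pred = model(mri)                             # (2, 4, 13) subscore trajectories
print(pred.shape)
```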
We present the findings of The Alzheimer's Disease Prediction Of Longitudinal Evolution (TADPOLE) Challenge, which compared the performance of 92 algorithms from 33 international teams at predicting the future trajectory of 219 individuals at risk of Alzheimer's disease. Challenge participants were required to make a prediction, for each month of a 5-year future time period, of three key outcomes: clinical diagnosis, Alzheimer's Disease Assessment Scale Cognitive Subdomain (ADAS-Cog13), and total volume of the ventricles. No single submission was best at predicting all three outcomes. For clinical diagnosis and ventricle volume prediction, the best algorithms strongly outperformed simple baselines in predictive ability. However, for ADAS-Cog13 no single submitted prediction method was significantly better than random guessing. Two ensemble methods, based on taking the mean and median over all predictions, obtained top scores on almost all tasks. Better-than-average performance at diagnosis prediction was generally associated with the additional inclusion of features from cerebrospinal fluid (CSF) samples and diffusion tensor imaging (DTI). On the other hand, better performance at ventricle volume prediction was associated with the inclusion of summary statistics, such as patient-specific biomarker trends. The submission system remains open via the website https://tadpole.grand-challenge.org, while code for submissions is being collated by TADPOLE SHARE: https://tadpole-share.github.io/. Our work suggests that current prediction algorithms are accurate for biomarkers related to clinical diagnosis and ventricle volume, opening up the possibility of cohort refinement in clinical trials for Alzheimer's disease.
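For the two numeric outcomes (ADAS-Cog13 and ventricle volume), the mean and median ensembles described above reduce to per-entry aggregation across submissions; a minimal sketch is below, with hypothetical array shapes matching the challenge's 92 submissions, 219 subjects, and 60 monthly forecasts.

```python
# Minimal sketch of the two top-scoring ensembles: take the per-entry mean
# and median across all submitted forecast matrices.
import numpy as np

rng = np.random.default_rng(3)
# hypothetical stand-in: 92 submissions x 219 subjects x 60 monthly forecasts
forecasts = rng.normal(20, 5, size=(92, 219, 60))

mean_ensemble = forecasts.mean(axis=0)           # consensus per subject/month
median_ensemble = np.median(forecasts, axis=0)   # robust to outlier submissions
print(mean_ensemble.shape, median_ensemble.shape)
```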
In this article we derive an unbiased expression for the expected mean-squared error associated with continuously differentiable estimators of the noncentrality parameter of a chi-square random variable. We then consider the task of denoising squared-magnitude magnetic resonance image data, which are well modeled as independent noncentral chi-square random variables on two degrees of freedom. We consider two broad classes of linearly parameterized shrinkage estimators that can be optimized using our risk estimate, one in the general context of undecimated filterbank transforms, and another in the specific case of the unnormalized Haar wavelet transform. The resultant algorithms are computationally tractable and improve upon state-of-the-art methods for both simulated and actual magnetic resonance image data.
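The noise model underlying this work is easy to verify empirically: with i.i.d. Gaussian noise of variance $\sigma^2$ in the real and imaginary channels, the squared magnitude divided by $\sigma^2$ is noncentral chi-square on two degrees of freedom with noncentrality $|s|^2/\sigma^2$. The sketch below checks this with scipy; the signal value and noise level are arbitrary, and the risk estimate and shrinkage estimators themselves are not implemented here.

```python
# Sanity-check sketch of the squared-magnitude MR noise model: normalized
# squared magnitude ~ noncentral chi-square on 2 degrees of freedom.
import numpy as np
from scipy.stats import ncx2

rng = np.random.default_rng(4)
s, sigma, n = 3.0 + 1.0j, 0.7, 200_000        # true complex signal, noise sd
re = s.real + sigma * rng.standard_normal(n)
im = s.imag + sigma * rng.standard_normal(n)
m2 = (re**2 + im**2) / sigma**2               # normalized squared magnitude

lam = abs(s)**2 / sigma**2                    # noncentrality parameter
print("empirical mean :", m2.mean())
print("ncx2(2, lam)   :", ncx2.mean(2, lam))  # should agree: 2 + lam
```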