
Neuroimaging Modality Fusion in Alzheimer's Classification Using Convolutional Neural Networks

Posted by: Arjun Punjabi
Publication date: 2018
Paper language: English


Automated methods for Alzheimer's disease (AD) classification have the potential for great clinical benefit and may provide insight for combating the disease. Machine learning, and more specifically deep neural networks, have been shown to have great efficacy in this domain. These algorithms often use neurological imaging data such as MRI and PET, but a comprehensive and balanced comparison of these modalities has not been performed. In order to accurately determine the relative strength of each imaging variant, this work performs a comparison study in the context of Alzheimer's dementia classification using the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset. Furthermore, this work analyzes the benefits of using both modalities in a fusion setting and discusses how these data types may be leveraged in future AD studies using deep learning.
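
The fusion setting described above can be illustrated with a minimal two-branch 3D CNN sketch in PyTorch. The layer sizes, branch depths, and the concatenation-based fusion point are illustrative assumptions for exposition, not the architecture reported in the paper.

# Minimal sketch: late fusion of MRI and PET volumes with small 3D CNN branches.
# All layer sizes and the concatenation fusion point are illustrative assumptions.
import torch
import torch.nn as nn

class ModalityBranch(nn.Module):
    """Small 3D CNN encoder for a single imaging modality (MRI or PET)."""
    def __init__(self, in_channels=1, features=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, features, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling -> one feature vector per scan
        )

    def forward(self, x):
        return self.encoder(x).flatten(1)

class FusionClassifier(nn.Module):
    """Concatenates MRI and PET features and predicts AD vs. control."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.mri_branch = ModalityBranch()
        self.pet_branch = ModalityBranch()
        self.head = nn.Linear(32 * 2, n_classes)

    def forward(self, mri, pet):
        fused = torch.cat([self.mri_branch(mri), self.pet_branch(pet)], dim=1)
        return self.head(fused)

# Example forward pass on dummy 64^3 volumes (batch of 2).
model = FusionClassifier()
logits = model(torch.randn(2, 1, 64, 64, 64), torch.randn(2, 1, 64, 64, 64))

A single-modality baseline would simply drop one branch and keep the same head, which is what makes a balanced MRI-vs-PET-vs-fusion comparison straightforward in this kind of setup.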


Read also

Steady-State Visual Evoked Potentials (SSVEPs) are neural oscillations from the parietal and occipital regions of the brain that are evoked from flickering visual stimuli. SSVEPs are robust signals measurable in the electroencephalogram (EEG) and are commonly used in brain-computer interfaces (BCIs). However, methods for high-accuracy decoding of SSVEPs usually require hand-crafted approaches that leverage domain-specific knowledge of the stimulus signals, such as specific temporal frequencies in the visual stimuli and their relative spatial arrangement. When this knowledge is unavailable, such as when SSVEP signals are acquired asynchronously, such approaches tend to fail. In this paper, we show how a compact convolutional neural network (Compact-CNN), which only requires raw EEG signals for automatic feature extraction, can be used to decode signals from a 12-class SSVEP dataset without the need for any domain-specific knowledge or calibration data. We report an across-subject mean accuracy of approximately 80% (chance being 8.3%) and show this is substantially better than current state-of-the-art hand-crafted approaches using canonical correlation analysis (CCA) and Combined-CCA. Furthermore, we analyze our Compact-CNN to examine the underlying feature representation, discovering that the deep learner extracts additional phase- and amplitude-related features associated with the structure of the dataset. We discuss how our Compact-CNN shows promise for BCI applications that allow users to freely gaze/attend to any stimulus at any time (e.g., asynchronous BCI) and also provides a method for analyzing SSVEP signals in a way that might augment our understanding of basic processing in the visual cortex.
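
A compact CNN operating on raw multi-channel EEG can be sketched as below; the channel count, window length, kernel sizes, and filter counts are placeholder assumptions and not the paper's exact Compact-CNN.

# Minimal sketch: compact CNN mapping raw EEG windows to 12 SSVEP classes (PyTorch).
# Channel count, window length, and layer hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class CompactSSVEPNet(nn.Module):
    def __init__(self, n_channels=8, n_samples=256, n_classes=12):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution over the raw EEG time axis
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            # spatial convolution across electrodes
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1), groups=8, bias=False),
            nn.BatchNorm2d(16), nn.ELU(),
            nn.AvgPool2d((1, 4)), nn.Dropout(0.5),
        )
        with torch.no_grad():
            flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(flat, n_classes)

    def forward(self, x):                      # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

model = CompactSSVEPNet()
logits = model(torch.randn(4, 1, 8, 256))      # four dummy one-second EEG windows

Because the network learns its own temporal and spatial filters, no stimulus frequencies or electrode layout need to be supplied, which is the property the abstract contrasts with CCA-style decoders.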
Three major biomarkers, beta-amyloid (A), pathologic tau (T), and neurodegeneration (N), are recognized as valid proxies for the neuropathologic changes of Alzheimer's disease. While there are extensive studies on cerebrospinal fluid biomarkers (amyloid, tau), their spatial propagation pattern across the brain is missing and their interactive mechanisms with neurodegeneration are still unclear. To this end, we aim to analyze the spatiotemporal associations between ATN biomarkers using large-scale neuroimaging data. We first investigate the temporal appearances of amyloid plaques, tau tangles, and neuronal loss by modeling the longitudinal transition trajectories. Second, we propose linear mixed-effects models to quantify the pathological interactions and propagation of ATN biomarkers at each brain region. Our analysis of the current data shows that there exists a temporal latency in the build-up of amyloid to the onset of tau pathology and neurodegeneration. The propagation pattern of amyloid can be characterized by its diffusion along the topological brain network. Our models provide sufficient evidence that the progression of pathological tau and neurodegeneration share a strong regional association, which is different from amyloid.
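
A region-wise linear mixed-effects model of the kind described can be sketched with statsmodels as follows; the column names, the random-intercept-per-subject structure, and the single amyloid-plus-age predictor are illustrative assumptions rather than the authors' exact specification.

# Minimal sketch: per-region linear mixed-effects model relating amyloid burden to tau,
# with a random intercept per subject (statsmodels). Column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

def fit_region_lme(df: pd.DataFrame, region: str):
    """df is long-format longitudinal data with columns:
    subject, region, visit_age, amyloid, tau."""
    sub = df[df["region"] == region]
    model = smf.mixedlm("tau ~ amyloid + visit_age",
                        data=sub, groups=sub["subject"])
    return model.fit()

# Hypothetical usage on a longitudinal ADNI-style table:
# result = fit_region_lme(longitudinal_df, "precuneus")
# print(result.summary())   # the fixed-effect slope estimates amyloid-tau coupling

Fitting one such model per brain region yields a map of interaction strengths, which is how regional associations between biomarkers can be compared.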
Qi She, Anqi Wu (2019)
Latent dynamics discovery is challenging in extracting complex dynamics from high-dimensional noisy neural data. Many dimensionality reduction methods have been widely adopted to extract low-dimensional, smooth and time-evolving latent trajectories. However, simple state transition structures, linear embedding assumptions, or inflexible inference networks impede the accurate recovery of dynamic portraits. In this paper, we propose a novel latent dynamic model that is capable of capturing nonlinear, non-Markovian, long short-term time-dependent dynamics via recurrent neural networks and tackling complex nonlinear embedding via non-parametric Gaussian process. Due to the complexity and intractability of the model and its inference, we also provide a powerful inference network with bi-directional long short-term memory networks that encode both past and future information into posterior distributions. In the experiment, we show that our model outperforms other state-of-the-art methods in reconstructing insightful latent dynamics from both simulated and experimental neural datasets with either Gaussian or Poisson observations, especially in the low-sample scenario. Our codes and additional materials are available at https://github.com/sheqi/GP-RNN_UAI2019.
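
The amortized inference idea, encoding both past and future observations into per-timestep posteriors with a bidirectional LSTM, can be sketched as below. Dimensions and the diagonal-Gaussian posterior form are illustrative assumptions; the authors' full GP-RNN implementation is at https://github.com/sheqi/GP-RNN_UAI2019.

# Minimal sketch: bidirectional LSTM inference network producing Gaussian posteriors
# over latent states at each time step (PyTorch). Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    def __init__(self, obs_dim=50, latent_dim=3, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(obs_dim, hidden, batch_first=True, bidirectional=True)
        self.to_mean = nn.Linear(2 * hidden, latent_dim)
        self.to_logvar = nn.Linear(2 * hidden, latent_dim)

    def forward(self, y):                      # y: (batch, time, obs_dim)
        h, _ = self.rnn(y)                     # both directions carry past + future info
        mean, logvar = self.to_mean(h), self.to_logvar(h)
        z = mean + torch.randn_like(mean) * (0.5 * logvar).exp()   # reparameterization
        return z, mean, logvar

enc = BiLSTMEncoder()
z, mu, logvar = enc(torch.randn(8, 100, 50))   # 8 trials, 100 time bins, 50 neurons

The sampled latents would then feed the generative side (recurrent latent dynamics plus a nonlinear observation model) during variational training; that part is omitted here.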
Blanking processes belong to the most widely used manufacturing techniques due to their economic efficiency. Their economic viability depends to a large extent on the resulting product quality and the associated customer satisfaction as well as on possible downtimes. In particular, the occurrence of increased tool wear reduces the product quality and leads to downtimes, which is why considerable research has been carried out in recent years with regard to wear detection. While processes have widely been monitored based on force and acceleration signals, a new approach is pursued in this paper. Blanked workpieces manufactured by punches with 16 different wear states are photographed and then used as inputs for Deep Convolutional Neural Networks to classify wear states. The results show that wear states can be predicted with surprisingly high accuracy, opening up new possibilities and research opportunities for tool wear monitoring of blanking processes.
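
Classifying workpiece photographs into 16 wear states reduces to standard image classification; a minimal sketch using a fine-tuned torchvision backbone is shown below. The ResNet-18 choice, input size, and training settings are assumptions for illustration, not the paper's setup.

# Minimal sketch: 16-way wear-state classification from photographs by fine-tuning
# a standard CNN backbone (PyTorch/torchvision). Backbone choice is an assumption.
import torch
import torch.nn as nn
from torchvision import models

n_wear_states = 16
model = models.resnet18(weights=None)                # or ImageNet-pretrained weights
model.fc = nn.Linear(model.fc.in_features, n_wear_states)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.randn(4, 3, 224, 224)                 # dummy batch of workpiece photos
labels = torch.randint(0, n_wear_states, (4,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()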
The Machine Recognition of Crystallization Outcomes (MARCO) initiative has assembled roughly half a million annotated images of macromolecular crystallization experiments from various sources and setups. Here, state-of-the-art machine learning algorithms are trained and tested on different parts of this data set. We find that more than 94% of the test images can be correctly labeled, irrespective of their experimental origin. Because crystal recognition is key to high-density screening and the systematic analysis of crystallization experiments, this approach opens the door to both industrial and fundamental research applications.
