Multimodal Representation Learning and Adversarial Hypergraph Fusion for Early Alzheimer's Disease Prediction


Abstract

Multimodal neuroimaging can provide complementary information about dementia, but the small number of subjects with complete multimodal data limits representation learning. Moreover, distribution inconsistency across modalities may lead to ineffective fusion that fails to sufficiently exploit intra-modal and inter-modal interactions, compromising diagnostic performance. To address these problems, we propose a novel multimodal representation learning and adversarial hypergraph fusion (MRL-AHF) framework for Alzheimer's disease diagnosis using complete trimodal images. First, an adversarial strategy and a pre-trained model are incorporated into the MRL module to extract latent representations from the multimodal data. Two hypergraphs are then constructed from the latent representations, and an adversarial network based on graph convolution is employed to narrow the distribution difference between their hyperedge features. Finally, the hyperedge-invariant features are fused by hyperedge convolution for disease prediction. Experiments on the public Alzheimer's Disease Neuroimaging Initiative (ADNI) database demonstrate that our model outperforms related models on Alzheimer's disease detection and, by analyzing abnormal brain connections, offers a possible way to understand the underlying mechanisms of disease progression.
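The abstract describes the hypergraph step only at a high level. As a rough, non-authoritative illustration (not the authors' code), the sketch below shows one common way such a step can be realized: a KNN-based incidence matrix built from latent features, followed by the standard hyperedge-convolution operator of HGNN (Feng et al.), X' = σ(D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2} X Θ). The names `knn_incidence` and `HypergraphConv`, and the choice of k and of uniform hyperedge weights, are assumptions for illustration.

```python
# Minimal sketch (hypothetical, not the paper's implementation) of
# hypergraph construction from latent features and one hyperedge-
# convolution layer in the HGNN style.
import torch
import torch.nn as nn


def knn_incidence(features: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Build an N x N incidence matrix H (vertices x hyperedges):
    hyperedge e connects sample e to its k nearest neighbours."""
    dist = torch.cdist(features, features)       # pairwise distances
    _, idx = dist.topk(k + 1, largest=False)     # self + k neighbours
    nbrs = torch.zeros(features.size(0), features.size(0))
    nbrs.scatter_(1, idx, 1.0)                   # row e marks members of hyperedge e
    return nbrs.t()                              # transpose: columns become hyperedges


class HypergraphConv(nn.Module):
    """One hyperedge-convolution layer:
    X' = relu(Dv^-1/2 H W De^-1 H^T Dv^-1/2 X Theta)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        W = torch.ones(H.size(1))                # uniform hyperedge weights (assumed)
        Dv = (H * W).sum(dim=1)                  # vertex degrees
        De = H.sum(dim=0)                        # hyperedge degrees
        Dv_inv = torch.diag(Dv.clamp(min=1e-6).pow(-0.5))
        De_inv = torch.diag(De.clamp(min=1e-6).pow(-1.0))
        G = Dv_inv @ H @ torch.diag(W) @ De_inv @ H.t() @ Dv_inv
        return torch.relu(G @ self.theta(X))


# Example usage with hypothetical latent features:
#   latent = torch.randn(32, 64)                 # 32 subjects, 64-dim representations
#   H = knn_incidence(latent, k=5)
#   logits = HypergraphConv(64, 2)(latent, H)    # per-subject class scores
```

In the paper's framework this operator would additionally act on the hyperedge-invariant features produced by the adversarial graph-convolution network; that component is omitted here for brevity.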
