Functional connectivity (FC) matrices measure regional interactions in the brain and have been widely used in the classification of neurological brain diseases. However, an FC matrix is neither a natural image containing shape and texture information nor a vector of independent features, which makes extracting efficient features from such matrices a challenging problem. A brain network, also known as a connectome, naturally forms a graph structure whose nodes are brain regions and whose edges are inter-regional connectivities. In this study, we therefore propose novel graph convolutional networks (GCNs) to extract efficient disease-related features from FC matrices. Considering the time-dependent nature of brain activity, we compute dynamic FC matrices with sliding windows and implement a graph-convolution-based LSTM (long short-term memory) layer to process the resulting dynamic graphs. Moreover, patient demographics are used to guide the classification. Unlike conventional methods, in which personal information such as gender and age is added as extra input, we argue that this approach may not actually improve classification performance, because such personal information is usually distributed in a balanced way across the dataset. Instead, we propose to use demographic information as extra outputs and to share parameters among three networks predicting subject status, gender and age, with the latter two serving as auxiliary tasks. We tested the proposed architecture on the ADNI II dataset for classifying Alzheimer's disease patients against normal controls, reaching a classification accuracy, sensitivity and specificity of 0.90, 0.92 and 0.89, respectively.
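The sliding-window construction of dynamic FC matrices described above can be illustrated with a short sketch. This is a minimal example under our own assumptions, not the authors' implementation; the array `timeseries` (time points by ROIs), the window length and the stride are placeholders.

```python
import numpy as np

def dynamic_fc(timeseries, window_len=30, stride=5):
    """Compute a sequence of sliding-window FC matrices.

    timeseries : array of shape (T, R), T time points and R ROIs (assumed input).
    Returns an array of shape (num_windows, R, R) of Pearson correlation matrices.
    """
    T, R = timeseries.shape
    fc_matrices = []
    for start in range(0, T - window_len + 1, stride):
        window = timeseries[start:start + window_len]   # (window_len, R)
        fc = np.corrcoef(window, rowvar=False)          # (R, R) ROI-by-ROI correlations
        fc_matrices.append(fc)
    return np.stack(fc_matrices)

# Example with synthetic data: 200 time points, 90 ROIs.
ts = np.random.randn(200, 90)
dyn_fc = dynamic_fc(ts, window_len=30, stride=5)
print(dyn_fc.shape)  # (35, 90, 90): one graph per window, fed to the graph-LSTM
```

Each windowed correlation matrix defines one graph in the temporal sequence that the graph-convolutional LSTM layer would consume.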
Functional connectivity (FC) between regions of the brain can be assessed by the degree of temporal correlation measured with functional neuroimaging modalities. Based on the fact that these connectivities build a network, graph-based approaches for analyzing the brain connectome have provided insights into the functions of the human brain. The development of graph neural networks (GNNs) capable of learning representation from graph structured data has led to increased interest in learning the graph representation of the brain connectome. Although recent attempts to apply GNN to the FC network have shown promising results, there is still a common limitation that they usually do not incorporate the dynamic characteristics of the FC network which fluctuates over time. In addition, a few studies that have attempted to use dynamic FC as an input for the GNN reported a reduction in performance compared to static FC methods, and did not provide temporal explainability. Here, we propose STAGIN, a method for learning dynamic graph representation of the brain connectome with spatio-temporal attention. Specifically, a temporal sequence of brain graphs is input to the STAGIN to obtain the dynamic graph representation, while novel READOUT functions and the Transformer encoder provide spatial and temporal explainability with attention, respectively. Experiments on the HCP-Rest and the HCP-Task datasets demonstrate exceptional performance of our proposed method. Analysis of the spatio-temporal attention also provide concurrent interpretation with the neuroscientific knowledge, which further validates our method. Code is available at https://github.com/egyptdj/stagin
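An attention-weighted READOUT that pools node features into a graph-level vector, as referenced above, can be sketched as follows. This is a generic illustration under assumed shapes and layer sizes, not the exact STAGIN READOUT; `node_feats`, `hidden_dim` and the small scoring MLP are placeholders.

```python
import torch
import torch.nn as nn

class AttentionReadout(nn.Module):
    """Pool node features into a graph embedding with learned node attention."""
    def __init__(self, feat_dim, hidden_dim=64):
        super().__init__()
        # Small MLP scoring each node; scores become spatial attention weights.
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, node_feats):
        # node_feats: (num_nodes, feat_dim) for one brain graph at one time point.
        scores = self.scorer(node_feats)             # (num_nodes, 1)
        attn = torch.softmax(scores, dim=0)          # attention over brain regions
        graph_repr = (attn * node_feats).sum(dim=0)  # (feat_dim,) graph-level vector
        return graph_repr, attn.squeeze(-1)          # attention enables explainability

# Example: 400 brain regions with 128-dimensional GNN node features.
readout = AttentionReadout(feat_dim=128)
h, attn = readout(torch.randn(400, 128))
print(h.shape, attn.shape)  # torch.Size([128]) torch.Size([400])
```

The per-node attention weights are what make the spatial contribution of each region inspectable; a Transformer encoder over the resulting sequence of graph vectors would play the analogous role across time.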
Measurements of breast density have the potential to improve the efficiency and reduce the cost of screening mammography through personalized screening. Breast density has traditionally been evaluated from the dense area in a mammogram, but volumetric assessment methods, which measure the volumetric fraction of fibro-glandular tissue in the breast, are potentially more consistent and physically sound. The purpose of the present study is to evaluate a method for measuring volumetric breast density using photon-counting spectral tomosynthesis. The performance of the method was evaluated using phantom measurements and clinical data from a small population (n=18). The precision was determined to be 2.4 percentage points (pp) of volumetric breast density. Strong correlations were observed between contralateral (R^2=0.95) and ipsilateral (R^2=0.96) breast-density measurements. The measured breast density was anti-correlated with breast thickness, as expected, and exhibited a skewed distribution in the range [3.7%, 55%] with a median of 18%. We conclude that the method yields promising results that are consistent with expectations. The relatively high precision of the method may enable novel applications such as treatment monitoring.
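For reference, the volumetric breast density reported above is the volume fraction of fibro-glandular tissue in the breast. A minimal sketch of that ratio, computed from a per-voxel glandularity map, is shown below; the arrays `glandularity` and `breast_mask` are assumed placeholders and do not represent the study's spectral decomposition pipeline.

```python
import numpy as np

def volumetric_breast_density(glandularity, breast_mask):
    """Volumetric breast density as the fibro-glandular volume fraction (percent).

    glandularity : per-voxel fraction of fibro-glandular tissue in [0, 1] (assumed).
    breast_mask  : boolean array selecting voxels inside the breast.
    """
    glandular_volume = glandularity[breast_mask].sum()  # in voxel units
    total_volume = breast_mask.sum()                     # in voxel units
    return 100.0 * glandular_volume / total_volume

# Toy example with synthetic data centered near the reported median of 18%.
gl = np.clip(np.random.normal(0.18, 0.1, size=(64, 64, 40)), 0, 1)
mask = np.ones_like(gl, dtype=bool)
print(f"VBD = {volumetric_breast_density(gl, mask):.1f}%")
```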
Deep learning has achieved great success in various classification tasks. Typically, deep learning models learn the underlying features directly from data, and no relationships between classes are taken into account. Yet similarity between classes can influence classification performance. In this article, we propose a method that incorporates class-similarity knowledge into convolutional neural network models using a graph convolution layer. We evaluate our method on two benchmark image datasets, MNIST and CIFAR10, and analyze the results for different data and model sizes. Experimental results show that our model can improve classification accuracy, especially when the amount of available data is small.
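One plausible way to realize such a layer, sketched below under our own assumptions rather than the authors' exact design, is to propagate per-class scores over a normalized class-similarity graph; the similarity matrix `S` and the single propagation step are illustrative placeholders.

```python
import numpy as np

def class_graph_conv(logits, S):
    """Refine per-class logits with one graph-convolution step over a class graph.

    logits : (batch, num_classes) raw class scores from a CNN (assumed).
    S      : (num_classes, num_classes) class-similarity matrix, S[i, j] >= 0.
    """
    A = S + np.eye(S.shape[0])            # add self-loops
    d = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(d, d))   # symmetric normalization D^-1/2 A D^-1/2
    return logits @ A_hat                 # mix scores of similar classes

# Toy example: 3 classes where classes 0 and 1 are similar.
S = np.array([[0.0, 0.8, 0.1],
              [0.8, 0.0, 0.1],
              [0.1, 0.1, 0.0]])
logits = np.array([[2.0, 0.1, -1.0]])
print(class_graph_conv(logits, S))
```

In this reading, the graph convolution lets evidence for one class reinforce closely related classes, which is most useful when training data are scarce.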
Pre-training a graph neural network model can learn general features of large-scale networks, or of networks of the same type, via self-supervised methods, which allows the model to work even when node labels are missing. However, existing pre-training methods do not take network evolution into consideration. This paper proposes a pre-training method for dynamic graph neural networks (PT-DGNN), which uses a dynamic attributed-graph generation task to simultaneously learn the structural, semantic and evolutionary features of the graph. The method includes two steps: 1) dynamic sub-graph sampling, and 2) pre-training with the dynamic attributed-graph generation task. Comparative experiments on three real-world dynamic network datasets show that the proposed method achieves the best results on the link-prediction fine-tuning task.
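The dynamic sub-graph sampling step can be illustrated with a small sketch that biases sampling toward more recent edges. This is a simplified illustration under our own assumptions (an edge list with timestamps and a linear recency weighting), not the exact PT-DGNN procedure.

```python
import random

def sample_dynamic_subgraph(edges, num_edges, seed=0):
    """Sample a temporal sub-graph, favoring recent interactions.

    edges : list of (u, v, t) tuples with timestamp t (assumed input format).
    Returns a list of sampled edges, kept in temporal order.
    """
    rng = random.Random(seed)
    t_min = min(t for _, _, t in edges)
    # Recency weight: newer edges get larger sampling weight (simple linear scheme).
    weights = [t - t_min + 1.0 for _, _, t in edges]
    sampled = rng.choices(edges, weights=weights, k=min(num_edges, len(edges)))
    return sorted(set(sampled), key=lambda e: e[2])

# Toy dynamic network: edges appearing at increasing timestamps.
edges = [(0, 1, 1), (1, 2, 2), (2, 3, 3), (0, 3, 4), (1, 3, 5)]
print(sample_dynamic_subgraph(edges, num_edges=3))
```

A sub-graph sampled this way preserves the temporal ordering needed for the generation task, since the model can be asked to reconstruct the later edges and attributes from the earlier ones.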
As bone and air produce weak signals with conventional MR sequences, segmentation of these tissues is particularly difficult in MRI. We propose to integrate patch-based anatomical signatures and an auto-context model into a machine learning framework to iteratively segment MRI into air, bone and soft tissue. The proposed semantic classification random forest (SCRF) method consists of a training stage and a segmentation stage. During the training stage, patch-based anatomical features were extracted from registered MRI-CT training images, and the most informative features were identified to train a series of classification forests with an auto-context model. During the segmentation stage, the selected features were extracted from MRI and fed into the trained forests for MRI segmentation. The Dice similarity coefficients (DSC) for air, bone and soft tissue obtained with the proposed SCRF were 0.976, 0.819 and 0.932, compared to 0.916, 0.673 and 0.830 with RF, and 0.942, 0.791 and 0.917 with U-Net. SCRF also demonstrated superior sensitivity and specificity over RF and U-Net for all three structure types. The proposed segmentation technique could be a useful tool for segmenting bone, air and soft tissue, and has the potential to be applied to attenuation correction in PET/MRI systems, MRI-only radiation treatment planning and MR-guided focused ultrasound surgery.
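The auto-context iteration at the core of such a cascade, where each forest receives the previous forest's class-probability outputs as additional context features, can be sketched as follows. This is a generic scikit-learn illustration under assumed feature arrays, not the authors' implementation; `appearance_feats`, `labels`, the number of trees and the number of iterations are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_auto_context_forests(appearance_feats, labels, n_iters=3):
    """Train a cascade of random forests with auto-context.

    appearance_feats : (n_voxels, n_feats) patch-based features per voxel (assumed).
    labels           : (n_voxels,) tissue labels, e.g. 0=air, 1=bone, 2=soft tissue.
    """
    forests = []
    n_classes = len(np.unique(labels))
    context = np.zeros((appearance_feats.shape[0], n_classes))
    for _ in range(n_iters):
        X = np.hstack([appearance_feats, context])       # append context probabilities
        rf = RandomForestClassifier(n_estimators=50).fit(X, labels)
        context = rf.predict_proba(X)                    # context for the next forest
        forests.append(rf)
    return forests

def predict_auto_context(forests, appearance_feats, n_classes=3):
    context = np.zeros((appearance_feats.shape[0], n_classes))
    for rf in forests:
        context = rf.predict_proba(np.hstack([appearance_feats, context]))
    return context.argmax(axis=1)

# Toy example: 500 voxels with 20 appearance features, 3 tissue classes.
X = np.random.randn(500, 20)
y = np.random.randint(0, 3, size=500)
forests = train_auto_context_forests(X, y)
print(predict_auto_context(forests, X).shape)  # (500,)
```

Each pass refines the tissue probability map, so later forests can exploit the spatial and semantic context that the first appearance-only forest lacks.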