Because bone and air produce weak signals with conventional MR sequences, segmenting these tissues in MRI is particularly difficult. We propose to integrate patch-based anatomical signatures and an auto-context model into a machine learning framework to iteratively segment MRI into air, bone and soft tissue. The proposed semantic classification random forest (SCRF) method consists of a training stage and a segmentation stage. During the training stage, patch-based anatomical features were extracted from registered MRI-CT training images, and the most informative features were identified to train a series of classification forests with the auto-context model. During the segmentation stage, the selected features were extracted from MRI and fed into the trained forests for segmentation. The Dice similarity coefficients (DSC) for air, bone and soft tissue obtained with the proposed SCRF were 0.976, 0.819 and 0.932, compared with 0.916, 0.673 and 0.830 with RF and 0.942, 0.791 and 0.917 with U-Net. SCRF also demonstrated superior sensitivity and specificity over RF and U-Net for all three structure types. The proposed technique could be a useful tool for segmenting bone, air and soft tissue, with potential applications to attenuation correction in PET/MRI systems, MRI-only radiation treatment planning and MR-guided focused ultrasound surgery.
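The core of the auto-context idea above is a cascade of classifiers in which each stage is retrained on the raw features augmented with the previous stage's class-probability estimates. The following is a minimal sketch of that iteration; it uses a toy centroid-based soft classifier rather than the paper's classification forests, and the stage count and feature layout are illustrative assumptions.

```python
import numpy as np

def soft_probs(X, centroids):
    """Toy probabilistic classifier: softmax over negative distances to class centroids."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
    e = np.exp(-d)
    return e / e.sum(axis=1, keepdims=True)

def fit_centroids(X, y, n_classes):
    """Stand-in for training one stage (the paper trains a classification forest)."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def auto_context_train(X, y, n_classes=3, n_stages=3):
    """Train a cascade: each stage sees the raw features plus the
    previous stage's class-probability map (the 'context')."""
    stages = []
    probs = np.full((len(X), n_classes), 1.0 / n_classes)  # uninformative start
    for _ in range(n_stages):
        Xa = np.hstack([X, probs])            # augment with context features
        c = fit_centroids(Xa, y, n_classes)
        stages.append(c)
        probs = soft_probs(Xa, c)             # context for the next stage
    return stages

def auto_context_predict(X, stages, n_classes=3):
    probs = np.full((len(X), n_classes), 1.0 / n_classes)
    for c in stages:
        probs = soft_probs(np.hstack([X, probs]), c)
    return probs.argmax(axis=1)
```

In the paper's setting the three classes would correspond to air, bone and soft tissue, and the raw features would be the patch-based anatomical signatures.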
As deep learning shows unprecedented success in medical image analysis tasks, the lack of sufficient medical data is emerging as a critical problem. While recent attempts to solve the limited-data problem using Generative Adversarial Networks (GAN) have been successful in generating realistic and diverse images, most of them are based on image-to-image translation and thus require extensive datasets from different domains. Here, we propose a novel model that can successfully generate 3D brain MRI data from random vectors by learning the data distribution. Our 3D GAN model solves both the image blurriness and mode collapse problems by leveraging alpha-GAN, which combines the advantages of the Variational Auto-Encoder (VAE) and GAN with an additional code discriminator network. We also use the Wasserstein GAN with Gradient Penalty (WGAN-GP) loss to reduce training instability. To demonstrate the effectiveness of our model, we generate new images of normal brain MRI and show that our model outperforms baseline models in both quantitative and qualitative evaluations. We also train the model to synthesize brain disorder MRI data to demonstrate its wide applicability. Our results suggest that the proposed model can successfully generate various types and modalities of 3D whole-brain volumes from a small set of training data.
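The WGAN-GP critic objective mentioned above combines a Wasserstein term with a penalty that pushes the critic's input-gradient norm toward 1. The sketch below illustrates that objective with a deliberately simplified *linear* critic D(x) = w · x, whose input gradient is the constant vector w, so the penalty has a closed form; a real implementation would use a deep critic and automatic differentiation.

```python
import numpy as np

def wgan_gp_critic_loss(w, real, fake, lam=10.0, seed=0):
    """WGAN-GP critic loss for a linear critic D(x) = w @ x (illustration only)."""
    # Wasserstein term: E[D(fake)] - E[D(real)]
    wasserstein = (fake @ w).mean() - (real @ w).mean()
    # Gradient penalty is normally evaluated at random interpolates between
    # real and fake samples; for a linear critic the gradient is w everywhere.
    eps = np.random.default_rng(seed).uniform(size=(len(real), 1))
    x_hat = eps * real + (1.0 - eps) * fake  # interpolates (gradient of linear D is constant)
    # Penalty drives the critic toward being 1-Lipschitz: (||grad D|| - 1)^2
    penalty = lam * (np.linalg.norm(w) - 1.0) ** 2
    return wasserstein + penalty
```

With `lam = 10.0` (the value commonly used with WGAN-GP), a critic with unit gradient norm incurs no penalty, so training is free to maximize the Wasserstein separation between real and generated samples.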
Purpose: This study demonstrates an MR signal multitask learning method for simultaneous 3D segmentation and relaxometry of human brain tissues. Materials and Methods: A 3D inversion-prepared balanced steady-state free precession sequence was used to acquire in vivo multi-contrast brain images. The deep neural network contained 3 residual blocks; each block had 8 fully connected layers with sigmoid activation, layer normalization, and 256 neurons per layer. Online-synthesized MR signal evolutions and labels were used to train the neural network batch by batch. Empirically defined ranges of T1 and T2 values for normal gray matter, white matter and cerebrospinal fluid (CSF) were used as prior knowledge. Brain MRI experiments were performed on 3 healthy volunteers, and animal (N=6) and prostate patient (N=1) experiments were also conducted. Results: In the animal validation experiment, the differences/errors (mean difference $\pm$ standard deviation of difference) between the values estimated by the proposed method and the ground truth were 113 $\pm$ 486 and 154 $\pm$ 512 ms for T1, and 5 $\pm$ 33 and 7 $\pm$ 41 ms for T2, respectively. In the healthy volunteer experiments (N=3), whole-brain segmentation and relaxometry finished within ~5 seconds. The estimated apparent T1 and T2 maps were consistent with known brain anatomy and were not affected by coil sensitivity variation. Gray matter, white matter, and CSF were successfully segmented. The deep neural network can also generate synthetic T1- and T2-weighted images. Conclusion: The proposed multitask learning method can directly generate apparent T1 and T2 maps of the brain, as well as synthetic T1- and T2-weighted images, in conjunction with segmentation of gray matter, white matter and CSF.
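The abstract specifies the network shape precisely: 3 residual blocks, each with 8 fully connected layers of 256 neurons, sigmoid activation and layer normalization. A forward-pass sketch of that shape is given below; the exact ordering of activation, normalization and the skip connection is an assumption, and the random initialization is purely illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_norm(x, eps=1e-5):
    """Normalize each sample over its feature dimension."""
    mu = x.mean(axis=-1, keepdims=True)
    sd = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sd + eps)

def residual_block(x, weights, biases):
    """One block: 8 fully connected layers (sigmoid + layer norm),
    with a skip connection around the whole block."""
    h = x
    for W, b in zip(weights, biases):
        h = layer_norm(sigmoid(h @ W + b))
    return h + x  # residual connection

def forward(x, params):
    """Stack of 3 residual blocks, as described in the abstract."""
    for weights, biases in params:
        x = residual_block(x, weights, biases)
    return x

# Illustrative initialization matching the stated shape: 3 blocks x 8 layers x 256 units.
rng = np.random.default_rng(0)
params = [([rng.normal(0, 0.05, (256, 256)) for _ in range(8)],
           [np.zeros(256) for _ in range(8)]) for _ in range(3)]
```

In the paper's setting the 256-dimensional input would be the sampled MR signal evolution of a voxel, and the outputs would feed the relaxometry and segmentation heads.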
Magnetic resonance imaging (MRI) at high spatial resolution provides detailed anatomical information and is often necessary for accurate quantitative analysis. However, high spatial resolution typically comes at the expense of longer scan time, smaller spatial coverage, and lower signal-to-noise ratio (SNR). Single-image super-resolution (SISR), a technique aimed at restoring high-resolution (HR) details from a single low-resolution (LR) input image, has improved dramatically with recent breakthroughs in deep learning. In this paper, we introduce a new neural network architecture, the 3D Densely Connected Super-Resolution Network (DCSRN), to restore HR features of structural brain MR images. Through experiments on a dataset of 1,113 subjects, we demonstrate that our network outperforms bicubic interpolation as well as other deep learning methods in restoring 4x resolution-reduced images.
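The "densely connected" part of DCSRN refers to DenseNet-style blocks in which each layer receives the concatenation of the block input and all previous layers' outputs. A minimal sketch of that connectivity pattern follows; plain matrix multiplies stand in for the network's 3D convolutions, and the layer count and growth rate are illustrative assumptions.

```python
import numpy as np

def dense_block(x, weights):
    """Densely connected block: each layer receives the concatenation
    of the input and every previous layer's output (DenseNet-style)."""
    feats = [x]
    for W in weights:
        inp = np.concatenate(feats, axis=-1)   # all earlier features
        feats.append(np.maximum(0.0, inp @ W))  # ReLU; matmul stands in for 3D conv
    return np.concatenate(feats, axis=-1)
```

With input width `c0` and growth rate `g`, layer `i` maps `c0 + i*g` features to `g` new ones, so the block output has `c0 + n_layers*g` channels; this feature reuse is what lets dense networks stay compact.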
During the first years of life, the human brain undergoes dynamic, spatially heterogeneous changes, involving differentiation of neuronal types, dendritic arborization, axonal ingrowth, outgrowth and retraction, synaptogenesis, and myelination. To better quantify these changes, this article presents a method for probing tissue microarchitecture by characterizing water diffusion across a spectrum of length scales, factoring out the effects of intra-voxel orientation heterogeneity. Our method is based on the spherical means of the diffusion signal, computed over gradient directions for a fixed set of diffusion weightings (i.e., b-values). We decompose the spherical mean series at each voxel into a spherical mean spectrum (SMS), which essentially encodes the fractions of spin packets undergoing fine- to coarse-scale diffusion processes, characterizing hindered and restricted diffusion stemming respectively from extra- and intra-neurite water compartments. From the SMS, multiple orientation-distribution-invariant indices can be computed, allowing for example the quantification of neurite density, microscopic fractional anisotropy ($\mu$FA), per-axon axial/radial diffusivity, and free/restricted isotropic diffusivity. We show maps of these indices for baby brains, demonstrating that microscopic tissue features can be extracted from the developing brain with greater sensitivity and specificity to development-related changes. We also demonstrate that our method, called spherical mean spectrum imaging (SMSI), is fast, accurate, and can overcome the biases associated with other state-of-the-art microstructure models.
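The building block of the approach above, the spherical mean, is simply the average of the diffusion signal over all gradient directions within each b-shell, which removes the dependence on intra-voxel fiber orientation. A minimal per-voxel sketch:

```python
import numpy as np

def spherical_means(signal, bvals):
    """Average the diffusion signal over gradient directions within each
    b-shell. Orientation effects cancel in the mean, leaving a series
    that depends only on per-compartment diffusion properties."""
    shells = np.unique(bvals)
    means = np.array([signal[bvals == b].mean() for b in shells])
    return shells, means
```

The resulting spherical mean series (one value per b-value) is what the paper decomposes into the spherical mean spectrum at each voxel.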
In this paper, we propose a novel learning-based method for automated segmentation of brain tumors in multimodal MRI, which incorporates two sets of features: machine-learned and hand-crafted. A fully convolutional network (FCN) provides the machine-learned features, while texton-based features serve as the hand-crafted features. A random forest (RF) classifies the MRI voxels into normal brain tissues and different parts of the tumor, i.e., edema, necrosis and enhancing tumor. The method was evaluated on the BRATS 2017 challenge dataset. The results show that the proposed method provides promising segmentations: the mean Dice overlap against ground truth is 0.86, 0.78 and 0.66 for whole tumor, core and enhancing tumor, respectively.
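The Dice overlap used for evaluation here (and as the DSC in the SCRF abstract above) is a standard measure of agreement between a predicted binary mask and the ground truth, 2|A ∩ B| / (|A| + |B|). A minimal implementation:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A & B| / (|A| + |B|). Defined as 1.0 when both masks are empty."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0
```

A Dice of 1.0 means perfect overlap and 0.0 means none, so the reported 0.86 for whole tumor indicates substantially better agreement than the 0.66 for enhancing tumor.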