
Convolutional Neural Networks for Estimation of Myelin Maturation in Infant Brain

Posted by Akihiko Wada
Publication date: 2019
Research language: English





Myelination plays an important role in the neurological development of the infant brain, and MRI can visualize the extension of myelination as high T1 and low T2 signal intensity in white matter. We constructed a convolutional neural network (CNN) machine learning model to estimate myelin maturation. An eight-layer CNN architecture was trained to estimate subject age from T1- and T2-weighted images at five levels associated with myelin maturation in 119 subjects up to 24 months of age. The CNN model trained on the full age range showed a strong correlation between estimated age and corrected age, with a correlation coefficient, root mean square error, and mean absolute error of 0.81, 3.40, and 2.28, respectively. Moreover, an ensemble of two models trained on the 0 to 16 month and 8 to 24 month subsets improved these to 0.93, 2.12, and 1.34. Deep learning is adaptable to the estimation of myelin maturation in the infant brain.
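The following is a minimal sketch of the kind of age-regression CNN the abstract describes, not the authors' implementation: the exact eight-layer configuration, input resolution, treatment of the five slice levels, and the way the two range-specific models are combined are not given in the abstract, so the layer sizes, single 2-channel (T1, T2) input, and simple prediction averaging below are illustrative assumptions.

```python
# Sketch only: an age-regression CNN over paired T1/T2 slices plus a simple
# two-model ensemble (0-16 months and 8-24 months). All hyperparameters are
# assumptions for illustration.
import torch
import torch.nn as nn


class AgeRegressionCNN(nn.Module):
    """Small convolutional regressor: 2-channel (T1, T2) image -> age in months."""

    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),  # predicted age in months
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(x))


def ensemble_estimate(x, model_young, model_old):
    """Combine two range-specific models; averaging is an assumed fusion rule."""
    with torch.no_grad():
        return 0.5 * (model_young(x) + model_old(x))


# Example usage with random data standing in for a 2-channel (T1, T2) slice.
if __name__ == "__main__":
    x = torch.randn(1, 2, 128, 128)
    young, old = AgeRegressionCNN(), AgeRegressionCNN()
    print(ensemble_estimate(x, young, old).item())
```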


Read also

Skullstripping is defined as the task of segmenting brain tissue from a full head magnetic resonance image (MRI). It is a critical component in neuroimage processing pipelines. Downstream deformable registration and whole brain segmentation performance is highly dependent on accurate skullstripping. Skullstripping is an especially challenging task for infant (age range 0-18 months) head MRI images due to the significant size and shape variability of the head and the brain in that age range. Infant brain tissue development also changes the $T_1$-weighted image contrast over time, making consistent skullstripping a difficult task. Existing tools for adult brain MRI skullstripping are ill-equipped to handle these variations, and a specialized infant MRI skullstripping algorithm is necessary. In this paper, we describe a supervised skullstripping algorithm that utilizes three trained fully convolutional neural networks (CNNs), each of which segments 2D $T_1$-weighted slices in axial, coronal, and sagittal views respectively. The three probabilistic segmentations in the three views are linearly fused and thresholded to produce a final brain mask. We compared our method to existing adult and infant skullstripping algorithms and showed significant improvement based on the Dice overlap metric (average Dice of 0.97) with a manually labeled ground truth data set. Label fusion experiments on multiple, unlabeled data sets show that our method is consistent and has fewer failure modes. In addition, our method is computationally very fast, with a run time of 30 seconds per image on NVidia P40/P100/Quadro 4000 GPUs.
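A minimal sketch of the fusion step described in the abstract above: three per-view probabilistic segmentations are linearly combined and thresholded into a binary brain mask. The equal weights and the 0.5 threshold are illustrative assumptions, not values taken from the paper.

```python
# Sketch only: linear fusion of axial/coronal/sagittal probability volumes.
import numpy as np


def fuse_view_probabilities(p_axial: np.ndarray,
                            p_coronal: np.ndarray,
                            p_sagittal: np.ndarray,
                            weights=(1 / 3, 1 / 3, 1 / 3),
                            threshold: float = 0.5) -> np.ndarray:
    """Fuse three probability volumes (same shape, values in [0, 1]) and
    threshold the result to obtain a binary brain mask."""
    fused = (weights[0] * p_axial
             + weights[1] * p_coronal
             + weights[2] * p_sagittal)
    return (fused >= threshold).astype(np.uint8)


# Example with random volumes standing in for the three CNN outputs.
if __name__ == "__main__":
    shape = (64, 64, 64)
    probs = [np.random.rand(*shape) for _ in range(3)]
    mask = fuse_view_probabilities(*probs)
    print(mask.shape, mask.dtype, mask.mean())
```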
Understanding the dynamics of brain tumor progression is essential for optimal treatment planning. Cast in a mathematical formulation, it is typically viewed as evaluation of a system of partial differential equations, wherein the physiological processes that govern the growth of the tumor are considered. To personalize the model, i.e. find a relevant set of parameters with respect to the tumor dynamics of a particular patient, the model is informed from empirical data, e.g., medical images obtained from diagnostic modalities such as magnetic resonance imaging. Existing model-observation coupling schemes require a large number of forward integrations of the biophysical model and rely on simplifying assumptions about the functional form linking the output of the model with the image information. In this work, we propose a learning-based technique for the estimation of tumor growth model parameters from medical scans. The technique allows for explicit evaluation of the posterior distribution of the parameters by sequentially training a mixture-density network, relaxing the constraint on the functional form and reducing the number of samples necessary to propagate through the forward model for the estimation. We test the method on synthetic and real scans of rats injected with brain tumors to calibrate the model and to predict tumor progression.
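A minimal sketch of a mixture-density network of the kind mentioned in the abstract above: image-derived summary features are mapped to the parameters of a Gaussian mixture over growth-model parameters, giving an explicit approximation of the posterior. The feature dimension, number of components, and network sizes are assumptions, not the authors' configuration.

```python
# Sketch only: a Gaussian mixture-density network and its negative
# log-likelihood loss for posterior estimation of model parameters.
import torch
import torch.nn as nn


class MixtureDensityNetwork(nn.Module):
    def __init__(self, n_features: int, n_params: int, n_components: int = 5):
        super().__init__()
        self.n_params, self.n_components = n_params, n_components
        self.backbone = nn.Sequential(
            nn.Linear(n_features, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
        )
        self.pi = nn.Linear(64, n_components)                     # mixture weights (logits)
        self.mu = nn.Linear(64, n_components * n_params)          # component means
        self.log_sigma = nn.Linear(64, n_components * n_params)   # log std deviations

    def forward(self, x):
        h = self.backbone(x)
        pi = torch.softmax(self.pi(h), dim=-1)
        mu = self.mu(h).view(-1, self.n_components, self.n_params)
        sigma = self.log_sigma(h).view(-1, self.n_components, self.n_params).exp()
        return pi, mu, sigma


def mdn_negative_log_likelihood(pi, mu, sigma, target):
    """Negative log-likelihood of the target parameters under the mixture."""
    target = target.unsqueeze(1)                                   # (batch, 1, n_params)
    log_prob = torch.distributions.Normal(mu, sigma).log_prob(target).sum(dim=-1)
    return -torch.logsumexp(torch.log(pi) + log_prob, dim=-1).mean()


# Example: 10 image features -> posterior over 3 growth parameters.
if __name__ == "__main__":
    mdn = MixtureDensityNetwork(n_features=10, n_params=3)
    x, theta = torch.randn(8, 10), torch.randn(8, 3)
    pi, mu, sigma = mdn(x)
    print(mdn_negative_log_likelihood(pi, mu, sigma, theta).item())
```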
Deficient myelination of the brain is associated with neurodevelopmental delays, particularly in high-risk infants, such as those born small in relation to their gestational age (SGA). New methods are needed to further study this condition. Here, we employ Color Spatial Light Interference Microscopy (cSLIM), which uses a brightfield objective and RGB camera to generate pathlength-maps with nanoscale sensitivity in conjunction with a regular brightfield image. Using tissue sections stained with Luxol Fast Blue, the myelin structures were segmented from a brightfield image. Using a binary mask, those portions were quantitatively analyzed in the corresponding phase maps. We first used the CLARITY method to remove tissue lipids and validate the sensitivity of cSLIM to lipid content. We then applied cSLIM to brain histology slices. These specimens are from a previous MRI study, which demonstrated that appropriate for gestational age (AGA) piglets have increased internal capsule myelination (ICM) compared to small for gestational age (SGA) piglets and that a hydrolyzed fat diet improved ICM in both. The identity of samples was blinded until after statistical analyses.
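A minimal sketch of the mask-based quantification step described in the abstract above: the myelin mask segmented from the stained brightfield image is applied to the co-registered cSLIM phase map, and simple summary statistics of the masked pathlength values are computed. The function and the particular statistics reported are illustrative assumptions.

```python
# Sketch only: quantify phase/pathlength values inside a binary myelin mask.
import numpy as np


def masked_phase_statistics(phase_map: np.ndarray, myelin_mask: np.ndarray) -> dict:
    """phase_map: cSLIM pathlength image; myelin_mask: binary mask of same shape."""
    values = phase_map[myelin_mask.astype(bool)]
    return {
        "mean_phase": float(values.mean()),
        "median_phase": float(np.median(values)),
        "myelin_area_fraction": float(myelin_mask.mean()),
    }


# Example with synthetic data standing in for a registered phase map and mask.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    phase = rng.normal(loc=1.0, scale=0.2, size=(256, 256))
    mask = (rng.random((256, 256)) > 0.7).astype(np.uint8)
    print(masked_phase_statistics(phase, mask))
```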
We present a dual-stage neural network architecture for analyzing fine shape details from microscopy recordings in 3D. The system, tested on red blood cells, uses training data from both healthy donors and patients with a congenital blood disease. Characteristic shape features are revealed from the spherical harmonics spectrum of each cell and are automatically processed to create a reproducible and unbiased shape recognition and classification for diagnostic and theragnostic use.
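A minimal sketch of one way to turn a cell's spherical-harmonics expansion into rotation-invariant shape features, namely the power per degree l of the coefficients c_lm; the dual-stage network itself is not reproduced here, and the coefficient layout is an assumption.

```python
# Sketch only: per-degree power spectrum of spherical-harmonics coefficients,
# a rotation-invariant feature vector that a downstream classifier could use.
import numpy as np


def sh_power_spectrum(coeffs: dict) -> np.ndarray:
    """coeffs[l] is the complex coefficient vector (c_{l,-l}, ..., c_{l,l}).
    Returns P_l = sum_m |c_lm|^2 for each degree l."""
    max_l = max(coeffs)
    return np.array([np.sum(np.abs(coeffs[l]) ** 2) for l in range(max_l + 1)])


# Example with random coefficients up to degree 8.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coeffs = {l: rng.normal(size=2 * l + 1) + 1j * rng.normal(size=2 * l + 1)
              for l in range(9)}
    print(sh_power_spectrum(coeffs))
```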
We present cortical surface parcellation using spherical deep convolutional neural networks. Traditional multi-atlas cortical surface parcellation requires inter-subject surface registration using geometric features with high processing time on a single subject (2-3 hours). Moreover, even optimal surface registration does not necessarily produce optimal cortical parcellation, as parcel boundaries are not fully matched to the geometric features. In this context, a choice of training features is important for accurate cortical parcellation. To utilize the networks efficiently, we propose cortical parcellation-specific input data from an irregular and complicated structure of cortical surfaces. To this end, we align ground-truth cortical parcel boundaries and use their resulting deformation fields to generate new pairs of deformed geometric features and parcellation maps. To extend the capability of the networks, we then smoothly morph cortical geometric features and parcellation maps using the intermediate deformation fields. We validate our method on 427 adult brains for 49 labels. The experimental results show that our method outperforms traditional multi-atlas and naive spherical U-Net approaches, while achieving full cortical parcellation in less than a minute.
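A minimal sketch of the augmentation idea described in the abstract above: scale a registration deformation field by a factor in [0, 1] and use the intermediate field to warp both the geometric features and the parcellation map, yielding new training pairs. For simplicity the sketch works on a regular 2D grid rather than a spherical mesh, so it only illustrates the idea; names and parameters are assumptions.

```python
# Sketch only: warp feature and label maps with a partial (intermediate)
# deformation field to generate additional training pairs.
import numpy as np
from scipy.ndimage import map_coordinates


def warp_with_partial_deformation(features, labels, displacement, alpha):
    """displacement has shape (2, H, W): per-pixel (row, col) offsets.
    alpha in [0, 1] selects an intermediate deformation."""
    h, w = labels.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([rows + alpha * displacement[0],
                       cols + alpha * displacement[1]])
    warped_feat = map_coordinates(features, coords, order=1, mode="nearest")
    warped_lab = map_coordinates(labels, coords, order=0, mode="nearest")
    return warped_feat, warped_lab


# Example: random feature/label maps with a random displacement field.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feat = rng.normal(size=(32, 32))
    lab = rng.integers(0, 5, size=(32, 32))
    disp = rng.normal(scale=2.0, size=(2, 32, 32))
    f2, l2 = warp_with_partial_deformation(feat, lab, disp, alpha=0.5)
    print(f2.shape, l2.shape)
```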