Confocal histology provides an opportunity to establish intra-voxel fiber orientation distributions that can be used to quantitatively assess the biological relevance of diffusion weighted MRI models, e.g., constrained spherical deconvolution (CSD). Here, we apply deep learning to investigate the potential of single shell diffusion weighted MRI to explain histologically observed fiber orientation distributions (FOD) and compare the derived deep learning model with a leading CSD approach. This study (1) demonstrates that there exists additional information in the diffusion signal that is not currently exploited by CSD, and (2) provides an illustrative data-driven model that makes use of this information.
Diffusion-weighted magnetic resonance imaging (DW-MRI) is the only non-invasive approach for estimating intra-voxel tissue microarchitecture and reconstructing in vivo neural pathways in the human brain. With improvements in accelerated MRI acquisition technologies, DW-MRI protocols that use multiple levels of diffusion sensitization have gained popularity. A well-known advanced method for reconstructing white matter microstructure from multi-shell data is multi-tissue constrained spherical deconvolution (MT-CSD). MT-CSD substantially improves the resolution of intra-voxel structure over the traditional single-shell version, constrained spherical deconvolution (CSD). Herein, we explore the possibility of using deep learning on single-shell data (the b=1000 s/mm2 shell from the Human Connectome Project (HCP)) to estimate the information content captured by 8th-order MT-CSD applied to the full three-shell data (b=1000, 2000, and 3000 s/mm2 from the HCP). Briefly, we examine two network architectures: (1) a sequential network of fully connected dense layers with a residual block in the middle (ResDNN), and (2) a patch-based convolutional neural network with a residual block (ResCNN). For both networks, an additional output block for estimating voxel fractions was used with a modified loss function. Each approach was compared against the baseline of applying MT-CSD to all data on 15 subjects from the HCP, divided into 5 training, 2 validation, and 8 testing subjects, for a total of 6.7 million voxels. The fiber orientation distribution function (fODF) can be recovered with high correlation (0.77 vs. 0.74 and 0.65) with respect to the ground truth of MT-CSD derived from the multi-shell DW-MRI acquisitions. Source code and models have been made publicly available.
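The ResDNN architecture described above (fully connected layers with a residual block in the middle, mapping a single-shell signal to fODF spherical-harmonic coefficients) can be sketched as a plain NumPy forward pass. Layer widths, the input size of 90 gradient directions, and the weight initialization are illustrative assumptions, not the paper's exact values; the 45 outputs correspond to the even spherical-harmonic coefficients of an 8th-order expansion.

```python
import numpy as np

# Hypothetical sketch of the ResDNN idea: dense layers with a residual
# (skip) block in the middle, mapping single-shell DWI signals to
# 8th-order SH coefficients of the fODF. Sizes are assumptions.

def relu(x):
    return np.maximum(x, 0.0)

def dense(x, w, b):
    return x @ w + b

rng = np.random.default_rng(0)

def init(n_in, n_out):
    return rng.normal(0, 0.1, (n_in, n_out)), np.zeros(n_out)

w_in, b_in = init(90, 400)    # 90 assumed gradient directions -> hidden
w_r1, b_r1 = init(400, 400)   # residual block, first layer
w_r2, b_r2 = init(400, 400)   # residual block, second layer
w_out, b_out = init(400, 45)  # 45 even SH coefficients at order 8

def resdnn_forward(signal):
    h = relu(dense(signal, w_in, b_in))
    r = relu(dense(h, w_r1, b_r1))
    r = dense(r, w_r2, b_r2)
    h = relu(h + r)           # residual connection around the middle block
    return dense(h, w_out, b_out)

fodf_sh = resdnn_forward(rng.normal(size=(32, 90)))  # batch of 32 voxels
print(fodf_sh.shape)  # (32, 45)
```

In a trained model the weights would of course come from optimizing the modified loss against MT-CSD targets rather than random initialization.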
Purpose: To propose a deep learning-based reconstruction framework for ultrafast and robust diffusion tensor imaging and fiber tractography. Methods: We propose SuperDTI to learn the nonlinear relationship between diffusion-weighted images (DWIs) and the corresponding tensor-derived quantitative maps as well as the fiber tractography. SuperDTI bypasses the tensor fitting procedure, which is well known to be highly susceptible to noise and motion in DWIs. The network is trained and tested using datasets from the Human Connectome Project and from patients with ischemic stroke. SuperDTI is compared against state-of-the-art methods for diffusion map reconstruction and fiber tracking. Results: Using training and testing data from the same protocol and scanner, SuperDTI is shown to generate fractional anisotropy and mean diffusivity maps, as well as fiber tractography, from as few as six raw DWIs. The method achieves a quantification error of less than 5% in all regions of interest in white matter and gray matter structures. We also demonstrate that the trained neural network is robust to noise and motion in the testing data, and that the network trained on healthy volunteer data can be directly applied to stroke patient data without compromising lesion detectability. Conclusion: This paper demonstrates the feasibility of superfast diffusion tensor imaging and fiber tractography using deep learning with as few as six DWIs, bypassing tensor fitting. Such a significant reduction in scan time may allow the inclusion of DTI in the clinical routine for many potential applications.
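The core input/output contract of the SuperDTI idea (six raw DWIs in, tensor-derived maps such as FA and MD out, with no tensor fitting in between) can be sketched minimally. A real implementation would be a deep CNN; here a single per-pixel channel-mixing step (a 1x1 "convolution") stands in purely to make the shapes concrete. All sizes and weights are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: map six raw DWIs directly to FA and MD maps,
# skipping the noise-sensitive tensor fitting step. A 1x1 channel
# mix stands in for the deep network; weights are random placeholders.

rng = np.random.default_rng(0)
dwis = rng.random((6, 64, 64))        # six raw DWIs, one 64x64 slice
w = rng.normal(0, 0.1, (2, 6))        # mixes 6 input channels -> (FA, MD)
b = np.zeros((2, 1, 1))

maps = np.tensordot(w, dwis, axes=([1], [0])) + b  # shape (2, 64, 64)
fa, md = maps[0], maps[1]
print(fa.shape, md.shape)  # (64, 64) (64, 64)
```

The point of the sketch is the bypass itself: the mapping from raw DWIs to quantitative maps is learned end to end, so noise and motion in individual DWIs never pass through an explicit least-squares tensor fit.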
Traditional neuroimage analysis pipelines involve computationally intensive, time-consuming optimization steps and thus do not scale well to large cohort studies with thousands or tens of thousands of individuals. In this work we propose a fast and accurate deep learning-based neuroimaging pipeline for the automated processing of structural human brain MRI scans, replicating FreeSurfer's anatomical segmentation, including surface reconstruction and cortical parcellation. To this end, we introduce an advanced deep learning architecture capable of whole-brain segmentation into 95 classes. The network architecture incorporates local and global competition via competitive dense blocks and competitive skip pathways, as well as multi-slice information aggregation, which specifically tailor network performance towards accurate segmentation of both cortical and sub-cortical structures. Further, we perform fast cortical surface reconstruction and thickness analysis by introducing a spectral spherical embedding and by directly mapping the cortical labels from the image to the surface. This approach provides a full FreeSurfer alternative for volumetric analysis (in under 1 minute) and surface-based thickness analysis (in approximately 1 hour of runtime). To establish the sustainability of this approach we perform extensive validation: we demonstrate high segmentation accuracy on several unseen datasets, measure generalizability, and show increased test-retest reliability as well as high sensitivity to group differences in dementia.
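The "local competition" in the competitive dense blocks mentioned above can be illustrated with a small sketch: instead of concatenating candidate feature maps (as a standard dense block would), competing maps are merged by an element-wise maximum, so the channel count stays fixed. The shapes and the number of competing inputs here are illustrative assumptions.

```python
import numpy as np

# Sketch of competitive merging: winner-takes-all across candidate
# feature maps via element-wise maximum (maxout-style), instead of
# channel concatenation. Channel count is preserved.

def competitive_merge(*feature_maps):
    # reduce over the stack of candidates, keeping the larger activation
    return np.maximum.reduce(feature_maps)

rng = np.random.default_rng(0)
f1 = rng.normal(size=(16, 32, 32))  # 16 channels, 32x32 slice
f2 = rng.normal(size=(16, 32, 32))  # competing map from a skip pathway

merged = competitive_merge(f1, f2)
print(merged.shape)  # (16, 32, 32) -- unchanged, unlike concatenation
```

Keeping the channel count fixed is what makes this kind of block cheaper than concatenation-based dense connectivity while still letting features compete.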
High-quality imaging usually requires bulky and expensive lenses to compensate for geometric and chromatic aberrations. This places severe constraints on optical design for compact or low-cost applications. Although one can use algorithmic reconstruction to remove the artifacts of low-end lenses, the degradation from optical aberrations is spatially varying, and the computation has to trade off efficiency for performance. For example, one must conduct patch-wise optimization or train a large set of local deep neural networks to achieve high reconstruction performance across the whole image. In this paper, we propose a PSF-aware plug-and-play deep network, which takes the aberrant image and a PSF map as input and produces the latent high-quality version by incorporating lens-specific deep priors, thus leading to a universal and flexible optical aberration correction method. Specifically, we pre-train a base model on a set of diverse lenses and then adapt it to a given lens by quickly refining the parameters, which largely alleviates the time and memory consumption of model learning. The approach is highly efficient in both the training and testing stages. Extensive results verify the promising applications of our proposed approach for compact low-end cameras.
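The PSF-aware input conditioning described above can be sketched in a few lines: the aberrant image and a per-pixel PSF descriptor map are stacked along the channel axis before entering the restoration network, so the network sees the spatially varying degradation explicitly. The PSF descriptor length (5 here) is an illustrative assumption.

```python
import numpy as np

# Sketch of PSF-aware conditioning: stack the degraded RGB image with a
# spatially varying PSF descriptor map along the channel axis, so one
# network can handle position-dependent aberrations. Sizes are assumptions.

rng = np.random.default_rng(0)
aberrant = rng.random((3, 128, 128))  # degraded RGB image
psf_map = rng.random((5, 128, 128))   # per-pixel PSF descriptors (assumed length 5)

net_input = np.concatenate([aberrant, psf_map], axis=0)
print(net_input.shape)  # (8, 128, 128): image and PSF seen jointly
```

This is what makes a single network "universal" across the image plane: the spatial variation is carried by the PSF channels rather than by training a separate local model per patch.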
A clock is, from an information-theoretic perspective, a system that emits information about time. One may therefore ask whether the theory of information imposes any constraints on the maximum precision of clocks. Here we show a quantum-over-classical advantage for clocks or, more precisely, the task of generating information about what time it is. The argument is based on information-theoretic considerations: we analyse how the accuracy of a clock scales with its size, measured in terms of the number of bits that could be stored in it. We find that a quantum clock can achieve a quadratically improved accuracy compared to a purely classical one of the same size.
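The claimed quantum advantage can be stated compactly. Writing $\alpha(n)$ for the accuracy achievable by a clock whose size is $n$ bits (this notation is an assumption introduced here, not taken from the abstract), a quadratically improved accuracy at equal size reads:

```latex
\alpha_{\mathrm{quantum}}(n) \;=\; \Theta\!\left(\alpha_{\mathrm{classical}}(n)^{2}\right),
```

i.e., for the same number of bits of storage, a quantum clock can generate information about the time with an accuracy that scales as the square of the best purely classical accuracy.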