
Spatially regularized reconstruction of fibre orientation distributions in the presence of isotropic diffusion

Published by Quan Zhou
Publication date: 2014
Research field: Informatics Engineering
Paper language: English





The connectivity and structural integrity of the white matter of the brain is nowadays known to be implicated in a wide range of brain-related disorders. However, it was not before the advent of diffusion Magnetic Resonance Imaging (dMRI) that researchers were able to examine the properties of white matter in vivo. Presently, among the various methods of dMRI, high angular resolution diffusion imaging (HARDI) is known to excel in its ability to provide reliable information about the local orientations of neural fasciculi (a.k.a. fibre tracts). Moreover, as opposed to the more traditional diffusion tensor imaging (DTI), HARDI is capable of distinguishing the orientations of multiple fibres passing through a given spatial voxel. Unfortunately, the ability of HARDI to discriminate between neural fibres that cross each other at acute angles remains limited, which is the main reason behind the development of numerous post-processing tools aimed at improving the directional resolution of HARDI. Among such tools is spherical deconvolution (SD). Due to its ill-posed nature, however, SD standardly relies on a number of a priori assumptions intended to render its results unique and stable. In this paper, we propose a different approach to the problem of SD in HARDI, which accounts for the spatial continuity of neural fibres as well as for the presence of isotropic diffusion. Subsequently, we demonstrate how the proposed solution can be used to successfully overcome the effect of partial voluming, while preserving the spatial coherency of cerebral diffusion at moderate-to-severe noise levels. In a series of both in silico and in vivo experiments, the performance of the proposed method is compared with that of several available alternatives, with the comparative results clearly supporting the viability and usefulness of our approach.
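As a rough, hypothetical illustration of the inverse problem SD poses (not the authors' spatially regularized formulation), a per-voxel spherical deconvolution can be sketched as Tikhonov-regularized least squares, with an optional isotropic column in the design matrix to model partial voluming:

```python
import numpy as np

def deconvolve(signal, A, lam=0.1):
    """Tikhonov-regularized per-voxel deconvolution (illustrative sketch).

    A      : (n_gradients, n_orientations) matrix of single-fibre responses
             sampled along candidate orientations; an extra all-ones column
             may be appended to model isotropic diffusion.
    signal : (n_gradients,) HARDI measurements for one voxel.
    lam    : regularization weight stabilising the ill-posed inverse.
    """
    n = A.shape[1]
    # Augment the system with sqrt(lam) * I and solve in the least-squares sense.
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])
    s_aug = np.concatenate([signal, np.zeros(n)])
    f, *_ = np.linalg.lstsq(A_aug, s_aug, rcond=None)
    return np.clip(f, 0.0, None)  # fODF amplitudes are non-negative
```

The authors' method additionally couples neighbouring voxels to enforce the spatial continuity of fibres, which a per-voxel solver like this one cannot capture.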


Read also

Magnetic resonance fingerprinting (MRF) provides a unique concept for simultaneous and fast acquisition of multiple quantitative MR parameters. Despite its acquisition efficiency, adoption of MRF into the clinics is hindered by its dictionary-matching-based reconstruction, which is computationally demanding and lacks scalability. Here, we propose a convolutional neural network-based reconstruction, which enables both accurate and fast reconstruction of parametric maps, and is adaptable to the needs of spatial regularization and the capacity of the reconstruction. We evaluated the method using MRF T1-FF, an MRF sequence for T1 relaxation time of water (T1H2O) and fat fraction (FF) mapping. We demonstrate the method's performance on a highly heterogeneous dataset consisting of 164 patients with various neuromuscular diseases imaged at thighs and legs. We empirically show the benefit of incorporating spatial regularization during the reconstruction and demonstrate that the method learns meaningful features from an MR physics perspective. Further, we investigate the ability of the method to handle highly heterogeneous morphometric variations and its generalization to anatomical regions unseen during training. The obtained results outperform the state-of-the-art in deep learning-based MRF reconstruction. The method achieved normalized root mean squared errors of 0.048 $\pm$ 0.011 for T1H2O maps and 0.027 $\pm$ 0.004 for FF maps when compared to dictionary matching in a test set of 50 patients. Coupled with fast MRF sequences, the proposed method has the potential to enable multiparametric MR imaging in clinically feasible time.
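For reference, a normalized root mean squared error of the kind quoted above can be computed as below; several normalization conventions exist (reference range, mean, or RMS), so the range-based choice here is an assumption, not necessarily the paper's:

```python
import numpy as np

def nrmse(pred, ref):
    """RMSE between a reconstructed parametric map and its reference
    (e.g. dictionary matching), normalized by the reference's dynamic range."""
    rmse = np.sqrt(np.mean((pred - ref) ** 2))
    return rmse / (ref.max() - ref.min())
```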
Tingran Gao, 2016
Kernel-based non-linear dimensionality reduction methods, such as Local Linear Embedding (LLE) and Laplacian Eigenmaps, rely heavily upon pairwise distances or similarity scores, with which one can construct and study a weighted graph associated with the dataset. When each individual data object carries additional structural details, however, the correspondence relations between these structures provide extra information that can be leveraged for studying the dataset using the graph. Based on this observation, we generalize Diffusion Maps (DM) in manifold learning and introduce the framework of Horizontal Diffusion Maps (HDM). We model a dataset with pairwise structural correspondences as a fibre bundle equipped with a connection. We demonstrate the advantage of incorporating such additional information and study the asymptotic behavior of HDM on general fibre bundles. In a broader context, HDM reveals the sub-Riemannian structure of high-dimensional datasets, and provides a nonparametric learning framework for datasets with structural correspondences.
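As a point of reference for the construction HDM generalizes, a minimal classical diffusion map can be sketched as follows (kernel bandwidth and component count are illustrative choices): build a Gaussian affinity matrix, row-normalize it into a Markov transition matrix, and embed the data with its leading non-trivial eigenvectors.

```python
import numpy as np

def diffusion_map(X, eps=1.0, n_components=2, t=1):
    """Classical diffusion map embedding of an (n_samples, n_features) array."""
    # Gaussian kernel on pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / eps)
    # Row-normalize to a Markov transition matrix.
    P = K / K.sum(axis=1, keepdims=True)
    # Eigen-decompose; the top eigenpair (eigenvalue 1, constant vector) is trivial.
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    vals, vecs = vals.real[order], vecs.real[:, order]
    # Diffusion coordinates: eigenvalues^t weight the non-trivial eigenvectors.
    return (vals[1:n_components + 1] ** t) * vecs[:, 1:n_components + 1]
```

HDM replaces the scalar affinities with block entries encoding the pairwise structural correspondences, i.e. the connection on the fibre bundle.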
This paper focuses on estimating probability distributions over the set of 3D rotations ($SO(3)$) using deep neural networks. Learning to regress models to the set of rotations is inherently difficult due to differences in topology between $\mathbb{R}^N$ and $SO(3)$. We overcome this issue by using a neural network to output the parameters of a matrix Fisher distribution, since these parameters are homeomorphic to $\mathbb{R}^9$. By using a negative log-likelihood loss for this distribution, we get a loss which is convex with respect to the network outputs. By optimizing this loss we improve the state-of-the-art on several challenging datasets, namely Pascal3D+, ModelNet10-$SO(3)$ and UPNA head pose.
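A small sketch of the key reparameterization: the nine network outputs are reshaped into the 3x3 parameter matrix $F$ of a matrix Fisher distribution, whose mode (the most likely rotation) is the proper rotation nearest to $F$, recoverable via an SVD with a determinant correction. Function names here are illustrative, not the paper's code:

```python
import numpy as np

def fisher_mode(F):
    """Mode of a matrix Fisher distribution with 3x3 parameter matrix F:
    the special-orthogonal matrix closest to F (orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(F)
    # Flip the last singular direction if needed so that det(R) = +1.
    S = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ S @ Vt
```

With a network emitting a flat 9-vector `out`, `fisher_mode(out.reshape(3, 3))` would give the point estimate, while the singular values of $F$ carry the distribution's concentration.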
The orientation dynamics of small anisotropic tracer particles in turbulent flows is studied using direct numerical simulation (DNS), and the results are compared with Lagrangian stochastic models. Generalizing earlier analysis for axisymmetric ellipsoidal particles (Parsa et al. 2012), we measure the orientation statistics and rotation rates of general, triaxial ellipsoidal tracer particles using Lagrangian tracking in DNS of isotropic turbulence. Triaxial ellipsoids that are very long in one direction, very thin in another, and of intermediate size in the third direction exhibit reduced rotation rates, similar to those of rods, in the ellipsoid's longest direction, while exhibiting increased rotation rates, similar to those of axisymmetric discs, in the thinnest direction. DNS results differ significantly from the case when the particle orientations are assumed to be statistically independent of the velocity gradient tensor. They are also different from the predictions of a Gaussian process for the velocity gradient tensor, which does not provide realistic preferred vorticity-strain-rate tensor alignments. DNS results are also compared with a stochastic model for the velocity gradient tensor based on the recent fluid deformation approximation (RFDA). Unlike the Gaussian model, the stochastic model accurately predicts the reduction in rotation rate in the longest direction of triaxial ellipsoids, since this direction aligns with the flow's vorticity, with its rotation perpendicular to the vorticity being reduced. For disc-like particles, or in directions perpendicular to the longest direction in triaxial particles, the model predicts noticeably smaller rotation rates than those observed in DNS, a behavior that can be understood based on the probability of alignment of the vorticity with the most contracting strain-rate eigen-direction in the model.
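For the axisymmetric special case that the triaxial analysis generalizes, the orientation of a spheroid evolves under Jeffery's equation; the sketch below (variable names illustrative) splits the velocity gradient into its strain and rotation parts, with the shape factor `lam` = (a² - 1)/(a² + 1) interpolating between rods (`lam` near 1) and discs (`lam` near -1):

```python
import numpy as np

def jeffery_rate(p, grad_u, lam):
    """Rate of change of the unit orientation vector p of an axisymmetric
    ellipsoid in a velocity gradient grad_u (Jeffery's equation)."""
    S = 0.5 * (grad_u + grad_u.T)   # strain-rate tensor
    O = 0.5 * (grad_u - grad_u.T)   # rotation-rate tensor
    # Rotation spins p; strain tilts it, with the projection term keeping |p| = 1.
    return O @ p + lam * (S @ p - (p @ S @ p) * p)
```

In solid-body rotation the strain part vanishes and any particle shape simply co-rotates with the flow.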
In this paper, we present a novel deep metric learning method to tackle the multi-label image classification problem. In order to better learn the correlations among image features, as well as labels, we attempt to explore a latent space where images and labels are embedded via two separate deep neural networks, respectively. To capture the relationships between image features and labels, we aim to learn a two-way deep distance metric over the embedding space from two different views, i.e., the distance between an image and its labels is not only smaller than the distances between the image and its labels' nearest neighbors, but also smaller than the distances between the labels and other images corresponding to the labels' nearest neighbors. Moreover, a reconstruction module for recovering correct labels is incorporated into the whole framework as a regularization term, such that the label embedding space is more representative. Our model can be trained in an end-to-end manner. Experimental results on publicly available image datasets corroborate the efficacy of our method compared with the state-of-the-art.
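The two-view constraint can be sketched as a pair of hinge losses on precomputed embeddings (the names, the margin, and the squared-distance choice are all assumptions; the paper's actual loss and networks may differ):

```python
import numpy as np

def two_way_margin_loss(img, pos_lab, neg_labs, neg_imgs, margin=1.0):
    """Two hinge terms on embedding distances: the image-to-its-label distance
    must undercut (view 1) distances from the image to competing labels and
    (view 2) distances from the label to competing images."""
    d = lambda a, b: np.sum((a - b) ** 2, axis=-1)  # squared Euclidean distance
    d_pos = d(img, pos_lab)
    loss_img_view = np.maximum(0.0, d_pos - d(img, neg_labs) + margin).mean()
    loss_lab_view = np.maximum(0.0, d_pos - d(pos_lab, neg_imgs) + margin).mean()
    return loss_img_view + loss_lab_view
```

In training, `img`, `pos_lab`, and the negatives would come from the two embedding networks, with gradients flowing back through both.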