
DeepBundle: Fiber Bundle Parcellation with Graph Convolution Neural Networks

Added by Feihong Liu
Publication date: 2019
Language: English





Parcellation of whole-brain tractography streamlines is an important step for tract-based analysis of brain white matter microstructure. Existing fiber parcellation approaches rely on accurate registration between an atlas and the tractograms of an individual; however, due to large individual differences, accurate registration is hard to guarantee in practice. To resolve this issue, we propose a novel deep learning method, called DeepBundle, for registration-free fiber parcellation. Our method utilizes graph convolution neural networks (GCNNs) to predict the parcellation label of each fiber tract. GCNNs are capable of extracting the geometric features of each fiber tract and harnessing the resulting features for accurate fiber parcellation, ultimately avoiding the use of atlases and any registration method. We evaluate DeepBundle using data from the Human Connectome Project. Experimental results demonstrate the advantages of DeepBundle and suggest that the geometric features extracted from each fiber tract can be used to parcellate fiber tracts effectively.
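The abstract describes classifying each streamline with a graph convolution network, where the fiber's own point sequence supplies the graph. A minimal numpy sketch of that idea is below; it is not the authors' implementation, and the path-graph construction, polynomial spectral filter, mean pooling, and all layer sizes (32 points, 8 hidden features, 5 candidate bundles) are illustrative assumptions.

```python
import numpy as np

def path_graph_laplacian(n):
    """Normalized Laplacian of a path graph linking consecutive fiber points."""
    A = np.zeros((n, n))
    idx = np.arange(n - 1)
    A[idx, idx + 1] = A[idx + 1, idx] = 1.0
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
    return np.eye(n) - d_inv_sqrt @ A @ d_inv_sqrt

def graph_conv(X, L, weights):
    """One spectral graph-convolution layer: a polynomial in L applied to X."""
    out = np.zeros_like(X @ weights[0])
    Lk = np.eye(L.shape[0])
    for W in weights:           # sum_k L^k X W_k
        out += Lk @ X @ W
        Lk = Lk @ L
    return np.maximum(out, 0.0)  # ReLU

def classify_fiber(points, layer_weights, clf_W):
    """Predict a parcellation label for one fiber tract (n_points x 3)."""
    L = path_graph_laplacian(len(points))
    H = graph_conv(points, L, layer_weights)
    pooled = H.mean(axis=0)      # order-invariant readout over the streamline
    return int(np.argmax(pooled @ clf_W))

rng = np.random.default_rng(0)
fiber = np.cumsum(rng.normal(size=(32, 3)), axis=0)             # toy streamline
layer_w = [rng.normal(scale=0.1, size=(3, 8)) for _ in range(2)]  # degree-1 filter
clf_w = rng.normal(scale=0.1, size=(8, 5))                      # 5 candidate bundles
label = classify_fiber(fiber, layer_w, clf_w)
```

Because the filter is a polynomial in the Laplacian, the features it extracts depend only on the fiber's local geometry, which is what lets the method skip atlas registration entirely.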




Related research

Graph Neural Networks have emerged as a useful tool to learn from data by applying additional constraints based on the graph structure. These graphs are often created with assumed intrinsic relations between the entities. In recent years, there have been tremendous improvements in architecture design, pushing up the performance in various prediction tasks. In general, these neural architectures combine layer depth and node feature aggregation steps. This makes it challenging to analyze the importance of features at various hops and the expressiveness of the neural network layers. As different graph datasets show varying levels of homophily and heterophily in features and class label distribution, it becomes essential to understand which features are important for the prediction tasks without any prior information. In this work, we decouple the node feature aggregation step from the depth of the graph neural network and introduce several key design strategies for graph neural networks. More specifically, we propose to use softmax as a regularizer and soft-selector of features aggregated from neighbors at different hop distances, and hop-normalization over GNN layers. Combining these techniques, we present a simple and shallow model, Feature Selection Graph Neural Network (FSGNN), and show empirically that the proposed model outperforms other state-of-the-art GNN models and achieves up to 64% improvements in accuracy on node classification tasks. Moreover, analyzing the learned soft-selection parameters of the model provides a simple way to study the importance of features in the prediction tasks. Finally, we demonstrate with experiments that the model is scalable to large graphs with millions of nodes and billions of edges.
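The two named design strategies, a softmax soft-selector over hop-wise features and hop-normalization, can be sketched in a few lines of numpy. This is a toy rendering under assumed details (row-wise L2 hop-normalization, a single linear head, a 6-node random graph), not the FSGNN reference code.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def hop_features(X, A_norm, K):
    """Precompute node features aggregated from 0..K-hop neighborhoods."""
    feats, H = [], X
    for _ in range(K + 1):
        feats.append(H)
        H = A_norm @ H
    return feats

def fsgnn_forward(feats, alpha, W):
    """Softmax-gate the hop features, hop-normalize each, then combine."""
    gate = softmax(alpha)                          # one learnable scalar per hop
    combined = np.zeros_like(feats[0])
    for g, F in zip(gate, feats):
        norms = np.linalg.norm(F, axis=1, keepdims=True) + 1e-12
        combined += g * (F / norms)                # hop-normalization, then gating
    return combined @ W                            # shallow linear head

rng = np.random.default_rng(1)
n, d, K = 6, 4, 2
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T + np.eye(n)                            # symmetric, with self-loops
deg = A.sum(axis=1)
A_norm = A / np.sqrt(np.outer(deg, deg))           # symmetric normalization
feats = hop_features(rng.normal(size=(n, d)), A_norm, K)
logits = fsgnn_forward(feats, rng.normal(size=K + 1), rng.normal(size=(d, 3)))
```

Because the gate values sum to one, inspecting them after training directly reveals which hop distances the model found informative, which is the interpretability benefit the abstract mentions.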
Hermite-Gaussian (HG) laser modes are a complete set of solutions to the free-space paraxial wave equation in Cartesian coordinates and represent a close approximation to physically realizable laser cavity modes. Additionally, HG modes can be mode-multiplexed to significantly increase the information capacity of optical communication systems due to their orthogonality. Since both cavity tuning and optical communication applications benefit from a machine-vision determination of HG modes, convolutional neural networks (CNNs) were implemented to detect the lowest twenty-one unique HG modes with an accuracy greater than 99%. As the effectiveness of a CNN is dependent on the diversity of its training data, extensive simulated and experimental datasets were created for training, validation, and testing.
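Simulated training images like those the abstract mentions can be generated directly from the HG mode definition: the (m, n) mode is a product of physicists' Hermite polynomials and a Gaussian envelope. The sketch below (waist, grid size, and extent are arbitrary choices, not values from the paper) builds a few modes and checks their orthogonality, the property that enables mode multiplexing.

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def hg_mode(m, n, w, grid):
    """Field amplitude of the (m, n) Hermite-Gaussian mode on a square grid."""
    u = np.sqrt(2) * grid / w
    Hm = hermval(u, [0] * m + [1])        # physicists' Hermite polynomial H_m
    Hn = hermval(u, [0] * n + [1])
    g = np.exp(-grid**2 / w**2)           # Gaussian envelope
    field = np.outer(Hm * g, Hn * g)      # separable in x and y
    return field / np.linalg.norm(field)  # unit-norm sample for a dataset

grid = np.linspace(-4, 4, 128)
modes = {(m, n): hg_mode(m, n, 1.0, grid) for m in range(2) for n in range(2)}
# distinct modes are orthogonal, so their discrete overlap is ~0
overlap = abs(np.sum(modes[(0, 0)] * modes[(1, 0)]))
```

Feeding the squared magnitudes of such fields (optionally with simulated noise and misalignment for diversity) to a CNN classifier reproduces the machine-vision setup the abstract describes.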
As large-scale graphs become increasingly prevalent, processing, extracting, and analyzing large graph data poses significant computational challenges. Graph coarsening is one popular technique to reduce the size of a graph while maintaining essential properties. Despite rich graph coarsening literature, there is only limited exploration of data-driven methods in the field. In this work, we leverage the recent progress of deep learning on graphs for graph coarsening. We first propose a framework for measuring the quality of a coarsening algorithm and show that, depending on the goal, we need to carefully choose the Laplace operator on the coarse graph and the associated projection/lift operators. Motivated by the observation that the current choice of edge weight for the coarse graph may be sub-optimal, we parametrize the weight assignment map with graph neural networks and train it to improve the coarsening quality in an unsupervised way. Through extensive experiments on both synthetic and real networks, we demonstrate that our method significantly improves common graph coarsening methods under various metrics, reduction ratios, graph sizes, and graph types. It generalizes to graphs of larger size ($25\times$ the training graphs), is adaptive to different losses (differentiable and non-differentiable), and scales to much larger graphs than previous work.
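The core objects here, a coarse Laplacian induced by a node-to-cluster map plus a reweighting of the coarse edges, can be sketched concisely. In the toy below the reweighting function is the identity standing in for the paper's learned GNN; the 6-cycle graph and the pairwise clustering are illustrative assumptions.

```python
import numpy as np

def coarse_laplacian(L, assign, weight_fn):
    """Coarsen L under a node->cluster map; weight_fn reweights coarse edges."""
    n_c = int(assign.max()) + 1
    P = np.zeros((L.shape[0], n_c))
    P[np.arange(L.shape[0]), assign] = 1.0   # cluster indicator (lift) matrix
    W_c = -(P.T @ L @ P)                     # aggregated inter-cluster weights
    np.fill_diagonal(W_c, 0.0)
    W_c = weight_fn(W_c)                     # a learned GNN would go here
    return np.diag(W_c.sum(axis=1)) - W_c

# toy graph: 6-cycle with unit edge weights
n = 6
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
L = np.diag(W.sum(axis=1)) - W

assign = np.array([0, 0, 1, 1, 2, 2])        # contract adjacent node pairs
Lc = coarse_laplacian(L, assign, lambda W_c: W_c)   # identity reweighting
```

The paper's contribution is, in effect, replacing the identity `weight_fn` with a trained network and optimizing it so that spectral properties of `Lc` track those of `L`.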
The motion-robust 2D Radial Turbo Spin Echo (RADTSE) pulse sequence can provide a high-resolution composite image, T2-weighted images at multiple echo times (TEs), and a quantitative T2 map, all from a single k-space acquisition. In this work, we use a deep-learning convolutional neural network (CNN) for the segmentation of the liver in abdominal RADTSE images. A modified UNET architecture with a generalized Dice loss objective function was implemented. Three 2D CNNs were trained, one for each image type obtained from the RADTSE sequence. On evaluating the performance of the CNNs on the validation set, we found that CNNs trained on TE images or T2 maps had higher average Dice scores than those trained on composite images. This, in turn, implies that information regarding T2 variation in tissues aids in improving segmentation performance.
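The generalized Dice loss mentioned as the objective weights each class by its inverse squared volume so that small structures are not swamped by large ones. A numpy sketch of the standard formulation follows (the 2-class 4x4 example is purely illustrative, and this is a generic rendering rather than the exact loss configuration from the paper).

```python
import numpy as np

def generalized_dice_loss(pred, target, eps=1e-6):
    """Generalized Dice loss for (n_classes, H, W) probability / one-hot maps.

    Each class is weighted by the inverse square of its target volume, so
    small structures contribute as much as large ones."""
    w = 1.0 / (target.sum(axis=(1, 2)) ** 2 + eps)
    inter = (pred * target).sum(axis=(1, 2))
    union = (pred + target).sum(axis=(1, 2))
    return 1.0 - 2.0 * (w * inter).sum() / ((w * union).sum() + eps)

# perfect prediction -> loss near 0; inverted prediction -> loss near 1
t = np.zeros((2, 4, 4))
t[1, 1:3, 1:3] = 1.0          # small foreground square
t[0] = 1.0 - t[1]             # background
loss_perfect = generalized_dice_loss(t, t)
loss_inverted = generalized_dice_loss(1.0 - t, t)
```

In a training loop this scalar would be computed on the softmax output of the UNET and backpropagated; the numpy version here only shows the arithmetic.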
Yitong Li, Zhengbo Zhu, Ze Li (2021)
The scattering of multispectral incoherent light is a common and unfavorable signal scrambling in natural scenes. However, the blurred light spot due to scattering still holds much information that remains to be explored. Previous methods failed to recover the polarized hyperspectral information from scattered incoherent light or relied on additional dispersion elements. Here we put forward the transmission matrix (TM) approach for extended objects under incoherent illumination, estimating the unknown TM either by experimental calibration or digital emulation. Employing a fiber bundle as a powerful imaging and dispersion element, we recover the spatial information in 252 polarized-spectral channels from a single speckle, thus achieving single-shot, high-resolution, broadband hyperspectral imaging for two polarization states with a cheap, compact, fiber-bundle-only system. Based on the scattering principle itself, our method not only greatly improves the robustness of the TM approach for retrieving the input spectral information, but also reveals the feasibility of exploring polarized spatio-spectral information from blurry speckles with only simple optical setups.
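At its core, the TM approach models the speckle as a linear mixture of the unknown spectral channels, so recovery reduces to inverting a calibrated matrix. The sketch below shows that linear-algebra step with a least-squares pseudo-inverse on random toy data; the channel and pixel counts are arbitrary, and the real system involves far more channels, noise, and regularization.

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_pixels = 8, 64               # spectral channels, speckle pixels

# calibrated transmission matrix: each column is the speckle response
# of one spectral channel
TM = rng.random((n_pixels, n_channels))

x_true = rng.random(n_channels)            # unknown spectral content of the scene
speckle = TM @ x_true                      # single-shot blurred measurement

# least-squares inversion of the TM recovers the channels from one speckle
x_rec = np.linalg.pinv(TM) @ speckle
```

With many more pixels than channels the system is heavily overdetermined, which is why a single speckle can carry all 252 polarized-spectral channels in the reported experiment.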
