
Y-net: 3D intracranial artery segmentation using a convolutional autoencoder

Published by: Li Chen
Publication date: 2017
Language: English





Automated segmentation of intracranial arteries on magnetic resonance angiography (MRA) allows for quantification of cerebrovascular features, which provides tools for understanding aging and pathophysiological adaptations of the cerebrovascular system. Using a convolutional autoencoder (CAE) for segmentation is promising as it takes advantage of the autoencoder structure in effective noise reduction and feature extraction by representing high dimensional information with low dimensional latent variables. In this report, an optimized CAE model (Y-net) was trained to learn a 3D segmentation model of intracranial arteries from 49 cases of MRA data. The trained model was shown to perform better than the three traditional segmentation methods in both binary classification and visual evaluation.
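As a rough illustration only, the encoder-decoder idea behind such a model can be sketched in PyTorch as below; the layer widths, network depth, and the single skip connection are illustrative assumptions and not the Y-net configuration described in the paper.

```python
# Minimal sketch of a 3D convolutional autoencoder for per-voxel vessel
# segmentation (PyTorch). All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ConvAutoencoder3D(nn.Module):
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        # Encoder: compress the MRA patch into a lower-resolution feature map.
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, base, 3, padding=1),
                                  nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv3d(base, base * 2, 3, stride=2, padding=1),
                                  nn.ReLU(inplace=True))
        # Decoder: upsample back to the input resolution.
        self.dec1 = nn.Sequential(nn.ConvTranspose3d(base * 2, base, 2, stride=2),
                                  nn.ReLU(inplace=True))
        self.out = nn.Conv3d(base, 1, 1)  # one vessel logit per voxel

    def forward(self, x):
        e1 = self.enc1(x)          # (B, base, D, H, W)
        e2 = self.enc2(e1)         # (B, 2*base, D/2, H/2, W/2)
        d1 = self.dec1(e2)         # (B, base, D, H, W)
        return self.out(d1 + e1)   # skip connection (illustrative)

# Usage: a 64^3 MRA patch in, a per-voxel vessel probability map out.
patch = torch.randn(1, 1, 64, 64, 64)
prob = torch.sigmoid(ConvAutoencoder3D()(patch))
```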


Read also

Coronary angiography is an indispensable assistive technique for cardiac interventional surgery. Segmentation and extraction of blood vessels from coronary angiography videos are essential prerequisites for physicians to locate, assess, and diagnose the plaques and stenosis in blood vessels. This article proposes a new video segmentation framework that can extract the clearest and most comprehensive coronary angiography images from a video sequence, thereby helping physicians to better observe the condition of blood vessels. The framework combines a 3D convolutional layer, which extracts spatio-temporal information from the video sequence, with a 2D CE-Net that accomplishes the segmentation task on the image sequence. The input is a few consecutive frames of an angiographic video, and the output is a segmentation mask. The framework yields good segmentation results despite the poor quality of coronary angiography video sequences.
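A minimal sketch, assuming PyTorch, of the general idea of fusing a short frame window with a 3D convolution before handing 2D features to a segmentation network; the frame count, channel width, and the stub in place of CE-Net are assumptions, not the paper's exact framework.

```python
# Sketch: a 3D convolution whose temporal kernel spans the whole clip collapses
# a few consecutive angiogram frames into one 2D feature map for a 2D segmenter.
import torch
import torch.nn as nn

class TemporalFusion(nn.Module):
    def __init__(self, frames=5, out_ch=32):
        super().__init__()
        # Kernel covers all frames, so the temporal dimension collapses to 1.
        self.fuse = nn.Conv3d(1, out_ch, kernel_size=(frames, 3, 3), padding=(0, 1, 1))

    def forward(self, clip):                 # clip: (B, 1, T, H, W)
        x = torch.relu(self.fuse(clip))      # (B, out_ch, 1, H, W)
        return x.squeeze(2)                  # (B, out_ch, H, W) for a 2D network

clip = torch.randn(1, 1, 5, 256, 256)        # 5 consecutive angiography frames
features_2d = TemporalFusion()(clip)         # feed to a 2D CE-Net-style segmenter
```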
Vessel stenosis is a major risk factor in cardiovascular diseases (CVD). To analyze the degree of vessel stenosis and support treatment management, extraction of the coronary artery area from Computed Tomographic Angiography (CTA) is regarded as a key procedure. However, manual segmentation by cardiologists is time-consuming and shows significant inter-observer variation. Although various computer-aided approaches have been developed to support segmentation of coronary arteries in CTA, the results remain unreliable due to the complex attenuation appearance of plaques, which are the cause of the stenosis. To overcome the difficulties caused by attenuation ambiguity, this paper proposes a 3D multi-channel U-Net architecture for fully automatic 3D coronary artery reconstruction from CTA. In addition to the original CTA image, the main idea of the proposed approach is to incorporate a vesselness map into the input of the U-Net, which serves as reinforcing information to highlight the tubular structure of the coronary arteries. The experimental results show that the proposed approach achieves a Dice Similarity Coefficient (DSC) of 0.8, compared with around 0.6 attained by previous CNN approaches.
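A minimal sketch, assuming PyTorch, of the multi-channel input idea: the CTA volume and a precomputed vesselness map (e.g., a Frangi-filter response, assumed to be available) are stacked along the channel axis so the first 3D convolution sees both; the network body and all sizes are illustrative.

```python
# Sketch of the multi-channel input: stack CTA intensities and a vesselness map
# as two channels before the first 3D convolution of a U-Net-style network.
import torch
import torch.nn as nn

first_conv = nn.Conv3d(in_channels=2, out_channels=32, kernel_size=3, padding=1)

cta = torch.randn(1, 1, 96, 96, 96)         # CTA volume
vesselness = torch.randn(1, 1, 96, 96, 96)  # precomputed vesselness response (assumed)

x = torch.cat([cta, vesselness], dim=1)     # shape (1, 2, 96, 96, 96)
features = first_conv(x)                    # vesselness reinforces tubular structures
```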
Z. Ma, L. Song, X. Feng, 2019
Intracranial aneurysm (IA) is a life-threatening condition in the human brain if it ruptures and causes cerebral hemorrhage, and it is challenging to detect from medical images whether an IA has ruptured. In this paper, we propose a novel graph-based neural network named GraphNet to detect IA rupture from 3D surface data. GraphNet is based on the graph convolutional network (GCN) and is designed for graph-level classification and node-level segmentation. The network uses GCN blocks to extract local surface features and pools them into global features. Data from 1250 patients, including 385 ruptured and 865 unruptured IAs, were collected from the clinic for the experiments, and performance is reported on 234 randomly selected test patients. The proposed GraphNet achieved an accuracy of 0.82 and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.82 in the classification task, significantly outperforming a baseline approach that does not use graph-based networks. The segmentation output of the model achieved a mean graph-node-based Dice coefficient (DSC) of 0.88.
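A minimal sketch, assuming PyTorch, of a GCN-style block with the two heads the abstract describes: a pooled graph-level rupture classifier and a per-node segmentation head. The feature sizes, the identity placeholder for the normalized adjacency, and the mean pooling are illustrative assumptions, not the GraphNet architecture itself.

```python
# Sketch: two stacked graph convolutions (A_hat @ X @ W), then a graph-level
# classification head via global mean pooling and a node-level segmentation head.
import torch
import torch.nn as nn

class TinyGraphNet(nn.Module):
    def __init__(self, in_feats=3, hidden=64):
        super().__init__()
        self.w1 = nn.Linear(in_feats, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.cls_head = nn.Linear(hidden, 1)   # graph-level rupture logit
        self.seg_head = nn.Linear(hidden, 1)   # node-level segmentation logit

    def forward(self, x, a_hat):
        # Graph convolution: aggregate neighbour features via A_hat, then transform.
        h = torch.relu(self.w1(a_hat @ x))
        h = torch.relu(self.w2(a_hat @ h))
        graph_logit = self.cls_head(h.mean(dim=0))  # global mean pool over nodes
        node_logits = self.seg_head(h)              # one logit per surface node
        return graph_logit, node_logits

# Usage: n surface nodes with xyz features and a normalized adjacency matrix.
n = 500
x = torch.randn(n, 3)
a_hat = torch.eye(n)  # placeholder for D^-1/2 (A + I) D^-1/2
graph_logit, node_logits = TinyGraphNet()(x, a_hat)
```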
The KiTS19 challenge paves the way for faster improvement of solid kidney tumor semantic segmentation methodologies. Accurate segmentation of kidney tumors in computed tomography (CT) images is a challenging task due to non-uniform motion, similar appearance, and varied shape. Motivated by this, this manuscript presents a novel kidney tumor segmentation method using a deep learning network termed the Hyper vision Net model. Existing approaches use modified versions of U-Net to segment the kidney tumor region. In the proposed architecture, supervision layers are introduced in the decoder part, which refine even minimal regions in the output. A dataset consisting of real arterial-phase abdominal CT scans of 300 patients, comprising 45,964 images, was provided by KiTS19 for training and validation of the proposed model. Compared with state-of-the-art segmentation methods, the results demonstrate the superiority of the approach, with training Dice scores of 0.9552 and 0.9633 for the tumor and kidney regions, respectively.
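A minimal sketch, assuming PyTorch, of the decoder-side supervision idea: auxiliary 1x1-convolution heads on intermediate decoder feature maps are upsampled to label resolution and added to the loss alongside the final output. The channel sizes, 2D setting, and loss weight are illustrative assumptions rather than the Hyper vision Net configuration.

```python
# Sketch of deep supervision: auxiliary heads on decoder stages contribute
# weighted BCE terms so small structures are refined at every scale.
import torch
import torch.nn as nn
import torch.nn.functional as F

def deep_supervision_loss(final_logits, aux_features, target, aux_heads, weight=0.4):
    """final_logits: (B,1,H,W); aux_features: list of intermediate decoder maps."""
    loss = F.binary_cross_entropy_with_logits(final_logits, target)
    for feat, head in zip(aux_features, aux_heads):
        aux = head(feat)                                 # 1x1 conv -> logits
        aux = F.interpolate(aux, size=target.shape[2:],  # match label resolution
                            mode="bilinear", align_corners=False)
        loss = loss + weight * F.binary_cross_entropy_with_logits(aux, target)
    return loss

# Usage with dummy tensors (two auxiliary decoder stages).
aux_heads = nn.ModuleList([nn.Conv2d(64, 1, 1), nn.Conv2d(32, 1, 1)])
final = torch.randn(2, 1, 256, 256)
feats = [torch.randn(2, 64, 64, 64), torch.randn(2, 32, 128, 128)]
labels = torch.randint(0, 2, (2, 1, 256, 256)).float()
loss = deep_supervision_loss(final, feats, labels, aux_heads)
```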
Automatic segmentation of cardiac magnetic resonance imaging (MRI) facilitates efficient and accurate volume measurement in clinical applications. However, due to anisotropic resolution and ambiguous borders (e.g., the right ventricular endocardium), existing methods suffer from degraded accuracy and robustness in 3D cardiac MRI video segmentation. In this paper, we propose a novel Deformable U-Net (DeU-Net) to fully exploit spatio-temporal information from 3D cardiac MRI video, comprising a Temporal Deformable Aggregation Module (TDAM) and a Deformable Global Position Attention (DGPA) network. First, the TDAM takes a cardiac MRI video clip as input, with temporal information extracted by an offset prediction network. The extracted temporal information is then fused via a temporal aggregation deformable convolution to produce fused feature maps. Furthermore, to aggregate meaningful features, the DGPA network employs a deformable attention U-Net, which can encode a wider range of multi-dimensional contextual information into global and local features. Experimental results show that DeU-Net achieves state-of-the-art performance on commonly used evaluation metrics, especially for cardiac boundary information (ASSD and HD).
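A minimal sketch of the offset-predicted deformable aggregation idea, assuming PyTorch and torchvision's DeformConv2d: a small convolution predicts sampling offsets from the concatenated target and neighbour features, and a deformable convolution aligns the neighbouring frame's features before a simple additive fusion. Channel sizes and the fusion step are illustrative assumptions, not the paper's TDAM.

```python
# Sketch: predict sampling offsets, warp a neighbouring frame's features with a
# deformable convolution, then fuse them with the target frame's features.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableAggregation(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        # 18 = 2 offsets (x, y) per position of a 3x3 kernel, one offset group.
        self.offset_pred = nn.Conv2d(2 * ch, 18, kernel_size=3, padding=1)
        self.deform = DeformConv2d(ch, ch, kernel_size=3, padding=1)

    def forward(self, target_feat, neighbour_feat):
        offsets = self.offset_pred(torch.cat([target_feat, neighbour_feat], dim=1))
        aligned = self.deform(neighbour_feat, offsets)   # neighbour aligned to target
        return target_feat + aligned                     # simple fusion (assumed)

t = torch.randn(1, 32, 128, 128)   # features of the frame being segmented
n = torch.randn(1, 32, 128, 128)   # features of an adjacent frame
fused = DeformableAggregation()(t, n)
```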