
PiaNet: A pyramid input augmented convolutional neural network for GGO detection in 3D lung CT scans

Published by: Weihua Liu
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





This paper proposes a new convolutional neural network with multiscale processing for detecting ground-glass opacity (GGO) nodules in 3D computed tomography (CT) images, referred to as PiaNet for short. PiaNet consists of a feature-extraction module and a prediction module. The former is constructed by introducing pyramid multiscale source connections into a contracting-expanding structure. The latter includes a bounding-box regressor and a classifier that are employed to simultaneously recognize GGO nodules and estimate bounding boxes at multiple scales. To train the proposed PiaNet, a two-stage transfer learning strategy is developed. In the first stage, the feature-extraction module is embedded into a classifier network that is trained on a large data set of GGO and non-GGO patches, which are generated by performing data augmentation on a small number of annotated CT scans. In the second stage, the pretrained feature-extraction module is loaded into PiaNet, and PiaNet is then fine-tuned using the annotated CT scans. We evaluate the proposed PiaNet on the LIDC-IDRI data set. The experimental results demonstrate that our method outperforms state-of-the-art counterparts, including the Subsolid CAD and Aidence systems and the S4ND and GA-SSD methods. PiaNet achieves a sensitivity of 91.75% with only one false positive per scan.
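
For illustration only, here is a minimal PyTorch sketch of the pyramid-input idea described in the abstract: downsampled copies of the raw CT volume are concatenated into each scale of a contracting path, and per-scale heads pair a classifier with a bounding-box regressor. The channel widths, fusion operation, and anchor-free box parameterization are assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(cin, cout):
    return nn.Sequential(
        nn.Conv3d(cin, cout, 3, padding=1), nn.BatchNorm3d(cout), nn.ReLU(inplace=True))

class PyramidInputEncoder(nn.Module):
    """Contracting path that re-injects a downsampled copy of the raw CT
    volume at every scale (the 'pyramid multiscale source connections')."""
    def __init__(self, widths=(16, 32, 64)):
        super().__init__()
        self.stages = nn.ModuleList()
        cin = 1
        for w in widths:
            # +1 channel for the downsampled source volume concatenated at this scale
            self.stages.append(conv_block(cin + 1, w))
            cin = w

    def forward(self, x):
        feats, cur = [], x
        for i, stage in enumerate(self.stages):
            src = x if i == 0 else F.interpolate(
                x, scale_factor=1 / 2 ** i, mode='trilinear', align_corners=False)
            cur = stage(torch.cat([cur, src], dim=1))
            feats.append(cur)
            cur = F.max_pool3d(cur, 2)
        return feats  # multiscale feature maps for the heads

class DetectionHead(nn.Module):
    """Per-scale classifier + bounding-box regressor (one box prediction per voxel)."""
    def __init__(self, cin):
        super().__init__()
        self.cls = nn.Conv3d(cin, 1, 1)   # GGO vs. background score
        self.reg = nn.Conv3d(cin, 6, 1)   # (z, y, x, d, h, w) offsets

    def forward(self, f):
        return self.cls(f), self.reg(f)

if __name__ == "__main__":
    enc = PyramidInputEncoder()
    heads = nn.ModuleList(DetectionHead(w) for w in (16, 32, 64))
    feats = enc(torch.randn(1, 1, 64, 64, 64))      # a toy 64^3 CT patch
    outs = [h(f) for h, f in zip(heads, feats)]
    print([tuple(o[0].shape) for o in outs])
```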




Read also

Segmentation of mandibles in CT scans during virtual surgical planning is crucial for 3D surgical planning in order to obtain a detailed surface representation of the patient's bone. Automatic segmentation of mandibles in CT scans is a challenging task due to the large variation in their shape and size between individuals. To address this challenge, we propose a convolutional neural network approach for mandible segmentation in CT scans that considers the continuum of anatomical structures through different planes. The proposed convolutional neural network adopts the architecture of the U-Net and then combines the resulting 2D segmentations from three different planes into a 3D segmentation. We implement this segmentation approach on 11 neck CT scans and evaluate its performance, achieving an average Dice coefficient of $0.89$ on two test mandible segmentations. Experimental results show that our proposed approach for mandible segmentation in CT scans exhibits high accuracy.
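
A minimal sketch of the three-plane fusion step described above, assuming slice-wise application of a 2D model along each axis followed by majority voting; predict_slice is a hypothetical stand-in for the trained 2D U-Net, and the voting rule is an assumption.

```python
import numpy as np

def segment_along_axis(volume, predict_slice, axis):
    """Apply a 2D segmentation model to every slice taken along `axis`."""
    moved = np.moveaxis(volume, axis, 0)
    pred = np.stack([predict_slice(s) for s in moved], axis=0)  # one binary mask per slice
    return np.moveaxis(pred, 0, axis)

def fuse_three_planes(volume, predict_slice):
    # A voxel is labeled mandible if at least two of the three planes agree.
    votes = sum(segment_along_axis(volume, predict_slice, ax).astype(np.uint8)
                for ax in (0, 1, 2))
    return (votes >= 2).astype(np.uint8)

if __name__ == "__main__":
    ct = np.random.rand(64, 64, 64).astype(np.float32)
    dummy_unet = lambda s: (s > 0.5)          # placeholder for the real 2D U-Net
    mask = fuse_three_planes(ct, dummy_unet)
    print(mask.shape, mask.dtype)
```
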
Computed Tomography (CT) imaging is widely used in geological exploration, medical diagnosis and other fields. In practice, however, the resolution of CT images is usually limited by scanning devices and high cost. Super resolution (SR) methods based on deep learning have achieved impressive performance on two-dimensional (2D) images, yet there are few effective SR algorithms for three-dimensional (3D) images. In this paper, we propose a novel network, the three-dimensional super-resolution convolutional neural network (3DSRCNN), to realize voxel super resolution for CT images. To address practical problems in the training process, such as slow convergence and insufficient memory, we utilize an adjustable learning rate, residual learning, gradient clipping and momentum stochastic gradient descent (SGD) to optimize the training procedure. In addition, we explore empirical guidelines for setting an appropriate number of network layers and for applying the residual learning strategy. Moreover, whereas previous learning-based algorithms must be trained separately for each scale factor, our single model can perform multi-scale SR. Finally, our method achieves better performance in terms of PSNR, SSIM and efficiency compared with conventional methods.
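
A hedged sketch of the training tricks mentioned above (residual learning, gradient clipping, momentum SGD, adjustable learning rate) wrapped around a toy 3D CNN; the layer sizes, clipping threshold and schedule are assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class Tiny3DSR(nn.Module):
    """Predicts the residual between an upsampled low-res volume and the high-res target."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(32, 1, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)   # residual learning: only the missing detail is learned

model = Tiny3DSR()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)       # momentum SGD
sched = torch.optim.lr_scheduler.StepLR(opt, step_size=10, gamma=0.5)  # adjustable learning rate
loss_fn = nn.MSELoss()

for step in range(3):                                  # toy loop on random data
    lowres_up = torch.rand(2, 1, 16, 16, 16)           # pre-upsampled low-resolution volume
    highres = torch.rand(2, 1, 16, 16, 16)
    loss = loss_fn(model(lowres_up), highres)
    opt.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)  # gradient clipping
    opt.step()
sched.step()
```
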
Accurate pulmonary nodule detection is a crucial step in lung cancer screening. Computer-aided detection (CAD) systems are not routinely used by radiologists for pulmonary nodule detection in clinical practice despite their potential benefits. Maximum intensity projection (MIP) images improve the detection of pulmonary nodules in radiological evaluation with computed tomography (CT) scans. Inspired by the clinical methodology of radiologists, we aim to explore the feasibility of applying MIP images to improve the effectiveness of automatic lung nodule detection using convolutional neural networks (CNNs). We propose a CNN-based approach that takes MIP images of different slab thicknesses (5 mm, 10 mm, 15 mm) and 1 mm axial section slices as input. Such an approach augments the two-dimensional (2-D) CT slice images with more representative spatial information that helps discriminate nodules from vessels through their morphologies. Our proposed method achieves a sensitivity of 92.67% with 1 false positive per scan and a sensitivity of 94.19% with 2 false positives per scan for lung nodule detection on 888 scans in the LIDC-IDRI dataset. The use of thick MIP images helps the detection of small pulmonary nodules (3 mm-10 mm) and results in fewer false positives. Experimental results show that utilizing MIP images can increase the sensitivity and lower the number of false positives, which demonstrates the effectiveness and significance of the proposed MIP-based CNN framework for automatic pulmonary nodule detection in CT scans. The proposed method also shows the potential for CNNs to gain from incorporating this clinical procedure into nodule detection.
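
A minimal sketch of building multi-thickness MIP inputs as described above, assuming 1 mm slice spacing and a sliding slab centred on each axial slice; the paper's exact preprocessing may differ.

```python
import numpy as np

def sliding_mip(volume, slab_thickness_mm, spacing_mm=1.0):
    """Maximum intensity projection over a slab centred on each axial slice."""
    half = max(int(round(slab_thickness_mm / spacing_mm)) // 2, 0)
    n = volume.shape[0]
    return np.stack([volume[max(i - half, 0):min(i + half + 1, n)].max(axis=0)
                     for i in range(n)], axis=0)

if __name__ == "__main__":
    ct = np.random.rand(120, 64, 64).astype(np.float32)   # (slices, H, W), 1 mm spacing
    # 1 mm axial slices plus 5/10/15 mm MIPs stacked as input channels per slice
    channels = np.stack([ct] + [sliding_mip(ct, t) for t in (5, 10, 15)], axis=1)
    print(channels.shape)   # (120, 4, 64, 64)
```
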
Automatic abnormality detection in abdominal CT scans can help doctors improve the accuracy and efficiency of diagnosis. In this paper we aim at detecting pancreatic ductal adenocarcinoma (PDAC), the most common pancreatic cancer. Given that the existence of a tumor can affect both the shape and the texture of the pancreas, we design a system that extracts shape and texture features at the same time for detecting PDAC. We propose a two-stage method for this 3D classification task. First, we segment the pancreas into a binary mask. Second, a FusionNet is proposed to take both the binary mask and the CT image as input and perform a binary classification. The optimal architecture of the FusionNet is obtained by searching a pre-defined functional space. We show that the classification results using either shape or texture information are complementary, and by fusing them with the optimized architecture, the performance improves by a large margin. Our method achieves a specificity of 97% and a sensitivity of 92% on 200 normal scans and 136 scans with PDAC.
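
A hedged sketch of the shape-plus-texture fusion idea: one branch sees the binary pancreas mask, the other the CT intensities, and their features are concatenated before a binary PDAC classifier. The branch depths and fusion point are assumptions; the paper searches for them over a pre-defined functional space.

```python
import torch
import torch.nn as nn

def branch():
    return nn.Sequential(
        nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        nn.AdaptiveAvgPool3d(1), nn.Flatten())

class FusionClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.shape_branch = branch()    # binary pancreas mask -> shape features
        self.texture_branch = branch()  # CT intensities -> texture features
        self.head = nn.Linear(64, 2)    # normal vs. PDAC

    def forward(self, mask, ct):
        f = torch.cat([self.shape_branch(mask), self.texture_branch(ct)], dim=1)
        return self.head(f)

if __name__ == "__main__":
    net = FusionClassifier()
    logits = net(torch.rand(1, 1, 32, 32, 32), torch.rand(1, 1, 32, 32, 32))
    print(logits.shape)   # torch.Size([1, 2])
```
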
Response evaluation criteria in solid tumors (RECIST) is the standard measurement of tumor extent for evaluating treatment responses in cancer patients. As such, RECIST annotations must be accurate. However, RECIST annotations manually labeled by radiologists require professional knowledge and are time-consuming, subjective, and prone to inconsistency among different observers. To alleviate these problems, we propose a cascaded convolutional neural network based method to semi-automatically label RECIST annotations and drastically reduce annotation time. The proposed method consists of two stages: lesion region normalization and RECIST estimation. We employ the spatial transformer network (STN) for lesion region normalization, where a localization network is designed to predict the lesion region and the transformation parameters with a multi-task learning strategy. For RECIST estimation, we adapt the stacked hourglass network (SHN), introducing a relationship constraint loss to improve the estimation precision. STN and SHN can both be learned in an end-to-end fashion. We train our system on the DeepLesion dataset, obtaining a consensus model trained on RECIST annotations performed by multiple radiologists over a multi-year period. Importantly, when judged against the inter-reader variability of two additional radiologist raters, our system performs more stably and with less variability, suggesting that RECIST annotations can be reliably obtained with reduced labor and time.
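
A minimal sketch of the lesion-region normalization step, assuming a toy localization network that predicts a 2x3 affine matrix and a spatial transformer that resamples the lesion to a canonical crop via affine_grid and grid_sample; it is not the paper's localization architecture or its multi-task loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyLocNet(nn.Module):
    """Predicts a 2x3 affine matrix mapping the output crop into the input image."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fc = nn.Linear(8, 6)
        # initialize to the identity transform so training starts from a full-image crop
        nn.init.zeros_(self.fc.weight)
        self.fc.bias.data = torch.tensor([1., 0., 0., 0., 1., 0.])

    def forward(self, x):
        return self.fc(self.features(x)).view(-1, 2, 3)

def normalize_lesion(image, loc_net, out_size=(64, 64)):
    theta = loc_net(image)                               # predicted affine parameters
    grid = F.affine_grid(theta, (image.size(0), 1, *out_size), align_corners=False)
    return F.grid_sample(image, grid, align_corners=False)   # canonical lesion crop

if __name__ == "__main__":
    crop = normalize_lesion(torch.rand(1, 1, 256, 256), ToyLocNet())
    print(crop.shape)   # torch.Size([1, 1, 64, 64])
```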