
Detection and Attention: Diagnosing Pulmonary Lung Cancer from CT by Imitating Physicians

Added by Kungang Li
Publication date: 2017
Language: English





This paper proposes a novel and efficient method to build a Computer-Aided Diagnosis (CAD) system for lung nodule detection based on Computed Tomography (CT). The task is treated as an Object Detection from Video (VID) problem, imitating how a radiologist reads CT scans. A lung nodule detector is trained to automatically learn nodule features from still images and to detect lung nodule candidates with both high recall and accuracy. Unlike previous work, which used 3-dimensional information around the nodule to reduce false positives, we propose two simple but efficient methods, Multi-slice propagation (MSP) and Motionless-guide suppression (MLGS), which analyze the sequence information of CT scans to reduce false negatives and suppress false positives. We evaluated our method on the open-source LUNA16 dataset, which contains 888 CT scans, and obtained a state-of-the-art result (Free-Response Receiver Operating Characteristic score of 0.892) at a detection speed (end-to-end within 20 seconds per patient on a single NVIDIA GTX 1080) much faster than existing methods.
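The sequence-based rescoring idea can be illustrated with a short sketch. The snippet below is only a rough analogue of the MSP and MLGS steps described above, assuming hypothetical per-slice detections given as box/score dictionaries and illustrative thresholds: a candidate corroborated by an overlapping detection on a neighboring slice is boosted (in the spirit of MSP), while an isolated, unsupported candidate is down-weighted (in the spirit of MLGS). It is not the paper's implementation.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def rescore_candidates(slices, iou_thr=0.3, boost=1.2, suppress=0.5):
    """slices: one list per CT slice of {'box': [x1, y1, x2, y2], 'score': float}.
    Candidates corroborated by an overlapping detection on an adjacent slice are
    boosted (MSP-like recovery of missed nodules); isolated candidates are damped
    (MLGS-like suppression of false positives). Thresholds are illustrative."""
    rescored = []
    for z, dets in enumerate(slices):
        neighbors = (slices[z - 1] if z > 0 else []) + \
                    (slices[z + 1] if z + 1 < len(slices) else [])
        out = []
        for d in dets:
            matched = any(iou(d['box'], n['box']) >= iou_thr for n in neighbors)
            out.append({'box': d['box'],
                        'score': min(1.0, d['score'] * (boost if matched else suppress))})
        rescored.append(out)
    return rescored

# Toy usage: the candidate seen on two consecutive slices is boosted.
slices = [[{'box': [10, 10, 20, 20], 'score': 0.6}],
          [{'box': [11, 11, 21, 21], 'score': 0.7}],
          []]
print(rescore_candidates(slices))
```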



Related research

Early detection of lung cancer is essential in reducing mortality. Recent studies have demonstrated the clinical utility of low-dose computed tomography (CT) to detect lung cancer among individuals selected based on very limited clinical information. However, this strategy yields high false positive rates, which can lead to unnecessary and potentially harmful procedures. To address such challenges, we established a pipeline that co-learns from detailed clinical demographics and 3D CT images. Toward this end, we leveraged data from the Consortium for Molecular and Cellular Characterization of Screen-Detected Lesions (MCL), which focuses on early detection of lung cancer. A 3D attention-based deep convolutional neural network (DCNN) is proposed to identify lung cancer from the chest CT scan without prior anatomical localization of the suspicious nodule. To improve the non-invasive discrimination between benign and malignant lesions, we applied a random forest classifier to a dataset that integrates clinical information with imaging data. The results show that the AUC obtained from clinical demographics alone was 0.635, while the attention network alone reached an accuracy of 0.687. In contrast, when applying our proposed pipeline integrating clinical and imaging variables, we reached an AUC of 0.787 on the testing dataset. The proposed network efficiently captures anatomical information for classification and also generates attention maps that explain the features driving performance.
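As a rough illustration of the final integration step, the sketch below fits a random forest on concatenated clinical and imaging features. All arrays are synthetic stand-ins (random numbers) for the MCL clinical demographics and for an embedding taken from the attention network; it shows only the idea of co-learning by feature concatenation, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins: per-patient clinical demographics (e.g. age, smoking
# history) and an image embedding taken from an attention CNN's penultimate layer.
n_patients = 500
clinical = rng.normal(size=(n_patients, 6))
image_embedding = rng.normal(size=(n_patients, 64))
labels = rng.integers(0, 2, size=n_patients)

# Co-learning in the simplest sense: concatenate the two views and classify.
X = np.concatenate([clinical, image_embedding], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```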
The analysis of multi-modality positron emission tomography and computed tomography (PET-CT) images for computer-aided diagnosis applications requires combining the sensitivity of PET for detecting abnormal regions with the anatomical localization provided by CT. Current methods for PET-CT image analysis either process the modalities separately or fuse information from each modality based on knowledge about the image analysis task. These methods generally do not consider the spatially varying visual characteristics that encode different information across the different modalities, which have different priorities at different locations. For example, high abnormal PET uptake in the lungs is more meaningful for tumor detection than physiological PET uptake in the heart. Our aim is to improve the fusion of the complementary information in multi-modality PET-CT with a new supervised convolutional neural network (CNN) that learns to fuse complementary information for multi-modality medical image analysis. Our CNN first encodes modality-specific features and then uses them to derive a spatially varying fusion map that quantifies the relative importance of each modality's features across different spatial locations. These fusion maps are then multiplied with the modality-specific feature maps to obtain a representation of the complementary multi-modality information at different locations, which can then be used for image analysis. We evaluated the ability of our CNN to detect and segment multiple regions with different fusion requirements using a dataset of PET-CT images of lung cancer. We compared our method to baseline techniques for multi-modality image fusion and segmentation. Our findings show that our CNN had a significantly higher foreground detection accuracy (99.29%, p < 0.05) than the fusion baselines and a significantly higher Dice score (63.85%) than recent PET-CT tumor segmentation methods.
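The spatially varying fusion mechanism can be sketched as a small PyTorch module. The two-branch encoder and layer sizes below are assumptions for illustration, not the paper's architecture; the point is only the mechanism: encode each modality separately, predict a per-voxel softmax weight for each modality, and blend the modality-specific feature maps with those weights.

```python
import torch
import torch.nn as nn

class SpatialFusion(nn.Module):
    """Toy sketch of spatially varying PET-CT fusion (assumed architecture):
    encode each modality separately, predict a per-voxel weight for each
    modality, and blend the two feature maps accordingly."""
    def __init__(self, channels=16):
        super().__init__()
        self.enc_pet = nn.Sequential(nn.Conv3d(1, channels, 3, padding=1), nn.ReLU())
        self.enc_ct = nn.Sequential(nn.Conv3d(1, channels, 3, padding=1), nn.ReLU())
        # Two fusion weights (one per modality) at every spatial location.
        self.fusion = nn.Conv3d(2 * channels, 2, kernel_size=1)

    def forward(self, pet, ct):
        f_pet, f_ct = self.enc_pet(pet), self.enc_ct(ct)
        w = torch.softmax(self.fusion(torch.cat([f_pet, f_ct], dim=1)), dim=1)
        # w[:, :1] weights the PET features, w[:, 1:] the CT features, per voxel.
        return w[:, :1] * f_pet + w[:, 1:] * f_ct

fused = SpatialFusion()(torch.randn(1, 1, 16, 32, 32), torch.randn(1, 1, 16, 32, 32))
print(fused.shape)  # torch.Size([1, 16, 16, 32, 32])
```

The blended feature map would then feed whatever detection or segmentation head the task requires.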
Importance: Lung cancer is the leading cause of cancer mortality in the US, responsible for more deaths than breast, prostate, colon, and pancreatic cancer combined, and it has recently been demonstrated that low-dose computed tomography (CT) screening of the chest can significantly reduce this death rate. Objective: To compare the performance of a deep learning model to state-of-the-art automated algorithms and radiologists, as well as to assess the robustness of the algorithm across heterogeneous datasets. Design, Setting, and Participants: Three low-dose CT lung cancer screening datasets from heterogeneous sources were used, including National Lung Screening Trial (NLST, n=3410) data, Lahey Hospital and Medical Center (LHMC, n=3174) data, Kaggle competition data (from both stages, n=1595+505), and University of Chicago data (UCM, a subset of NLST annotated by radiologists, n=197). Previous work on automated lung cancer malignancy estimation has used significantly less data in both size and diversity. In the first stage, our framework employs a nodule detector; in the second stage, we use both the image area around the nodules and nodule features as inputs to a neural network that estimates the malignancy risk for the entire CT scan. We trained our two-stage algorithm on part of the NLST dataset and validated it on the other datasets. Results, Conclusions, and Relevance: The proposed deep learning model: (a) generalizes well across all three datasets, achieving an AUC between 86% and 94%; (b) outperforms the widely accepted PanCan Risk Model, achieving an 11% better AUC score; (c) improves on the state of the art represented by the winners of the Kaggle Data Science Bowl 2017 competition on lung cancer screening; and (d) performs comparably to radiologists in estimating cancer risk at the patient level.
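The two-stage design can be summarized in a short schematic sketch. The detector, the per-nodule scorer, and the patch size below are hypothetical placeholders, and the noisy-OR aggregation over the most suspicious nodules is one plausible way to turn per-nodule scores into a scan-level risk, not necessarily the authors' choice.

```python
import numpy as np

def extract_patch(volume, center, size=48):
    """Crop a cubic patch around a nodule center, clipping at the volume border."""
    half = size // 2
    sl = tuple(slice(max(0, c - half), c + half) for c in center)
    return volume[sl]

def scan_malignancy_risk(ct_volume, detect_nodules, score_nodule, top_k=5):
    """Schematic two-stage pipeline (hypothetical interfaces): stage 1 proposes
    nodule candidates, stage 2 scores each candidate from the surrounding image
    patch plus nodule-level features, and the scan-level risk aggregates the
    per-nodule scores with a noisy-OR over the top-k candidates."""
    nodules = detect_nodules(ct_volume)          # stage 1: list of (center, features)
    if not nodules:
        return 0.0
    scores = [score_nodule(extract_patch(ct_volume, c), f) for c, f in nodules]
    top = sorted(scores, reverse=True)[:top_k]
    return 1.0 - float(np.prod([1.0 - s for s in top]))

# Toy usage with stub callables standing in for the trained networks.
vol = np.zeros((128, 256, 256), dtype=np.float32)
risk = scan_malignancy_risk(
    vol,
    detect_nodules=lambda v: [((64, 100, 100), np.array([0.3]))],
    score_nodule=lambda patch, feats: 0.2,
)
print(risk)
```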
Recently, multi-task networks have been shown both to offer additional estimation capabilities and, perhaps more importantly, to increase performance over single-task networks on a main/primary task. However, balancing the optimization criteria of multi-task networks across different tasks is an area of active exploration. Here, we extend a previously proposed 3D attention-based network with four additional multi-task subnetworks for the detection of lung cancer and four auxiliary tasks (diagnosis of asthma, chronic bronchitis, chronic obstructive pulmonary disease, and emphysema). We introduce and evaluate a learning policy, the Periodic Focusing Learning Policy (PFLP), that alternates the dominance of tasks throughout training. To improve performance on the primary task, we propose an Internal-Transfer Weighting (ITW) strategy that suppresses the loss functions on auxiliary tasks during the final stages of training. To evaluate this approach, we examined 3386 patients (single scan per patient) from the National Lung Screening Trial (NLST) and de-identified data from the Vanderbilt Lung Screening Program, with a 2517/277/592 (scans) split for training, validation, and testing. Baseline networks include a single-task strategy and a multi-task strategy without adaptive weights (no PFLP or ITW), while the primary experiments are multi-task trials with PFLP, ITW, or both. On the test set for lung cancer prediction, the baseline single-task network achieved a prediction AUC of 0.8080, and the multi-task baseline failed to converge (AUC 0.6720). However, applying PFLP helped the multi-task network converge and achieve a test-set lung cancer prediction AUC of 0.8402. Furthermore, our ITW technique boosted the PFLP-enabled multi-task network, achieving an AUC of 0.8462 (McNemar test, p < 0.01).
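A compact way to picture PFLP and ITW together is as a schedule over per-task loss weights. The sketch below uses assumed numbers (rotation period, weight magnitudes, and the epoch at which ITW-style suppression begins) purely for illustration; it conveys the idea of rotating which task dominates the loss and then suppressing the auxiliary tasks near the end of training, not the paper's actual policy.

```python
def task_weights(epoch, n_tasks=5, period=5, itw_start=80,
                 dominant_w=1.0, background_w=0.2, final_aux_w=0.05):
    """Return one loss weight per task (index 0 = primary lung cancer task,
    indices 1..4 = auxiliary diagnoses). PFLP-like rotation: every `period`
    epochs a different task receives the dominant weight. ITW-like suppression:
    after `itw_start`, auxiliary weights are shrunk so the primary task
    dominates the final stage of training. All constants are assumptions."""
    if epoch >= itw_start:
        return [1.0] + [final_aux_w] * (n_tasks - 1)
    dominant = (epoch // period) % n_tasks
    return [dominant_w if t == dominant else background_w for t in range(n_tasks)]

# Example: weights early in training (task 1 dominant) vs. the final stage.
print(task_weights(epoch=7))    # [0.2, 1.0, 0.2, 0.2, 0.2]
print(task_weights(epoch=90))   # [1.0, 0.05, 0.05, 0.05, 0.05]
```

The total loss at each step would then be the weighted sum of the per-task losses, e.g. `sum(w * l for w, l in zip(task_weights(epoch), task_losses))`.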
Purpose: To characterize regional pulmonary function on CT images using a radiomic filtering approach. Methods: We developed a radiomic filtering technique to capture the image-encoded regional pulmonary ventilation information on CT. The lung volumes were first segmented on 46 CT images. Then, a 3D sliding-window kernel was implemented to map the impulse response of radiomic features. Specifically, for each voxel in the lungs, 53 radiomic features were calculated within such a rotationally invariant 3D kernel to capture spatially encoded information. Accordingly, each voxel coordinate is represented as a 53-dimensional feature vector, and each image is represented as an image tensor that we refer to as a feature map. To test the technique as a potential pulmonary biomarker, Spearman correlation analysis was performed between the feature map and matched nuclear imaging measurements (Galligas PET or DTPA-SPECT) of lung ventilation. Results: Two features were found to be highly correlated with benchmark pulmonary ventilation function results based on the median of the Spearman correlation coefficient (ρ) distribution. In particular, the GLRLM-based Run Length Non-uniformity and GLCOM-based Sum Average features achieved robustly high correlation across the 46 patients and both Galligas PET and DTPA-SPECT nuclear imaging modalities, with ρ ranges (medians) of [0.05, 0.67] (0.46) and [0.21, 0.65] (0.45), respectively. Such results are comparable to other image-based pulmonary function quantification techniques. Conclusions: Our results provide evidence that local regions of sparsely encoded homogeneous lung parenchyma on CT are associated with diminished radiotracer uptake and measured lung ventilation defects on PET/SPECT imaging. This finding demonstrates the potential of radiomics to serve as a non-invasive surrogate of regional lung function and provides hypothesis-generating data for future studies.
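The radiomic filtering and correlation analysis can be sketched as follows. Simple first-order statistics (local mean and variance) stand in for the 53 texture features, and random volumes stand in for the CT and the matched ventilation images; generic `scipy` kernels and Spearman correlation are used as assumed tooling, not the authors' feature extractor.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import spearmanr

def feature_map(ct, kernel=5):
    """Toy radiomic-filtering sketch: slide a cubic kernel over the volume and
    compute per-voxel features. Local mean and variance stand in here for the
    53 texture features (GLRLM, GLCOM, ...) described in the abstract."""
    local_mean = ndimage.uniform_filter(ct, size=kernel)
    local_sq_mean = ndimage.uniform_filter(ct ** 2, size=kernel)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    return np.stack([local_mean, local_var], axis=-1)   # shape (*ct.shape, 2)

def correlate_with_ventilation(feat_map, ventilation, lung_mask):
    """Spearman correlation of each feature channel against the matched
    nuclear-imaging ventilation values inside the lung mask."""
    rhos = []
    for c in range(feat_map.shape[-1]):
        rho, _ = spearmanr(feat_map[..., c][lung_mask], ventilation[lung_mask])
        rhos.append(rho)
    return rhos

# Toy usage with random volumes standing in for CT and PET/SPECT ventilation.
rng = np.random.default_rng(0)
ct = rng.normal(size=(20, 32, 32)).astype(np.float32)
vent = rng.normal(size=ct.shape)
mask = np.ones(ct.shape, dtype=bool)
print(correlate_with_ventilation(feature_map(ct), vent, mask))
```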