
TUNA-Net: Task-oriented UNsupervised Adversarial Network for Disease Recognition in Cross-Domain Chest X-rays

Published by: Yuxing Tang
Publication date: 2019
Research language: English





In this work, we address the unsupervised domain adaptation problem for radiology image interpretation across domains. Specifically, we study how to adapt a disease recognition model from a labeled source domain to an unlabeled target domain, so as to reduce the effort of labeling each new dataset. To address the shortcoming of cross-domain, unpaired image-to-image translation methods, which typically ignore class-specific semantics, we propose a task-driven, discriminatively trained, cycle-consistent generative adversarial network, termed TUNA-Net. It preserves 1) low-level details, 2) high-level semantic information, and 3) mid-level feature representations during the image-to-image translation process, to favor the target disease recognition task. The TUNA-Net framework is general and can be readily adapted to other learning tasks. We evaluate the proposed framework on two public chest X-ray datasets for pneumonia recognition. The TUNA-Net model can adapt labeled adult chest X-rays in the source domain so that they appear as if they were drawn from the pediatric X-rays of the unlabeled target domain, while preserving the disease semantics. Extensive experiments show the superiority of the proposed method compared to state-of-the-art unsupervised domain adaptation approaches. Notably, TUNA-Net achieves an AUC of 96.3% for pediatric pneumonia classification, which is very close to that of the supervised approach (98.1%), but without the need for labels on the target domain.
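As a rough illustration of how such a task-driven objective could be assembled, the following PyTorch sketch combines an adversarial term, a cycle-consistency term (low-level details), and a classification term on the translated image (high-level disease semantics). The module names G_st, G_ts, D_t, and clf are hypothetical; this is a sketch of the general technique, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def tuna_style_generator_loss(G_st, G_ts, D_t, clf, x_s, y_s,
                              lambda_cyc=10.0, lambda_task=1.0):
    """Minimal sketch of a task-driven, cycle-consistent objective.
    G_st/G_ts (generators), D_t (target discriminator) and clf
    (disease classifier) are hypothetical modules."""
    fake_t = G_st(x_s)            # translate source X-ray to target style
    rec_s = G_ts(fake_t)          # cycle back to the source domain
    d_out = D_t(fake_t)
    # Adversarial term (least-squares form): fool the target discriminator
    adv = F.mse_loss(d_out, torch.ones_like(d_out))
    # Cycle consistency preserves low-level image details
    cyc = F.l1_loss(rec_s, x_s)
    # Task term keeps disease semantics: the translated image must still
    # be classified with its source label
    task = F.binary_cross_entropy_with_logits(clf(fake_t), y_s)
    return adv + lambda_cyc * cyc + lambda_task * task
```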




Read also

A number of deep learning methods have been applied to medical image segmentation and have achieved state-of-the-art performance. Given the importance of chest X-ray data in studying COVID-19, there is a demand for state-of-the-art models capable of precisely segmenting soft tissue on chest X-rays. The dataset used to explore the best segmentation model is from the Montgomery and Shenzhen hospitals and was made public in 2014. The best-known technique is U-Net, which has been applied to many medical datasets, including chest X-rays. However, most U-Net variants focus mainly on extracting contextual information and on skip connections, leaving considerable room for improving the extraction of spatial features. In this paper, we propose a dual encoder fusion U-Net framework for chest X-rays, named DEFU-Net, based on an Inception convolutional neural network with dilation and a densely connected recurrent convolutional neural network. The densely connected recurrent path extends the network deeper to facilitate contextual feature extraction. To increase the width of the network and enrich the feature representation, inception blocks with dilation are adopted; these blocks capture globally and locally spatial information from various receptive fields. The two paths are then fused by summing their features, preserving both contextual and spatial information for the decoding part. This multi-scale model benefits from a chest X-ray dataset drawn from two different sources (the Montgomery and Shenzhen hospitals). DEFU-Net achieves better performance than the basic U-Net, residual U-Net, BCDU-Net, R2U-Net, and attention R2U-Net, demonstrating its feasibility on a mixed dataset and approaching the state of the art. The source code for the proposed framework is publicly available at https://github.com/uceclz0/DEFU-Net.
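To make the dilated-inception idea concrete, here is a minimal PyTorch sketch of a multi-receptive-field block, followed by the sum-based fusion of the two encoder paths. The branch count and dilation rates are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DilatedInceptionBlock(nn.Module):
    """Sketch of an inception-style block with dilated branches, so one
    layer sees several receptive fields at once (rates are assumed)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4  # assumes out_ch divisible by 4
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4, 8)
        ])

    def forward(self, x):
        # Concatenating the branches mixes local and more global context
        return torch.cat([b(x) for b in self.branches], dim=1)

# Dual-encoder fusion by summation keeps the channel count fixed while
# combining contextual (recurrent path) and spatial (inception path) cues:
# fused = inception_feats + recurrent_feats
```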
The novel Coronavirus Disease 2019 (COVID-19) is a global pandemic spreading rapidly around the world. A robust, automatic early recognition of COVID-19, via auxiliary computer-aided diagnostic tools, is essential for disease cure and control. Chest radiography images, such as Computed Tomography (CT) and X-ray, together with deep Convolutional Neural Networks (CNNs), can be significant and useful material for designing such tools. However, designing such an automated tool is challenging, as a massive number of manually annotated images, the core requirement of supervised learning systems, are not yet publicly available. In this article, we propose a robust CNN-based network, called CVR-Net (Coronavirus Recognition Network), for the automatic recognition of coronavirus from CT or X-ray images. The proposed end-to-end CVR-Net is a multi-scale, multi-encoder ensemble model in which the outputs of two different encoders and their different scales are aggregated to obtain the final prediction probability. We train and test CVR-Net on three different datasets whose images were collected from different open-source repositories, and compare it with state-of-the-art methods trained and tested on the same datasets. We split the three datasets into five tasks, each with a different number of classes, to evaluate the multi-tasking CVR-Net. Our model achieves overall F1-scores & accuracies of 0.997 & 0.998, 0.963 & 0.964, 0.816 & 0.820, 0.961 & 0.961, and 0.780 & 0.780 for task-1 to task-5, respectively. As CVR-Net provides promising results on small datasets, it can be an auspicious computer-aided diagnostic tool to assist clinical practitioners and radiologists in the diagnosis of coronavirus. Our source code and model are publicly available at https://github.com/kamruleee51/CVR-Net.
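One plausible reading of the ensemble step, shown below as a PyTorch sketch, is to average the softmax probabilities produced by the different encoders and scales; the paper may use a different aggregation rule, so treat this as an assumption.

```python
import torch

def ensemble_prediction(logits_list):
    """Aggregate per-encoder, per-scale logits into one prediction by
    averaging class probabilities. A generic sketch of multi-encoder
    ensembling, not necessarily CVR-Net's exact aggregation."""
    probs = [torch.softmax(logits, dim=1) for logits in logits_list]
    return torch.stack(probs, dim=0).mean(dim=0)  # (batch, num_classes)
```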
Chest radiography is one of the most common diagnostic radiology exams and is critical for screening and diagnosing many thoracic diseases. Specialized algorithms have been developed to detect specific pathologies such as lung nodules or lung cancer. However, accurately detecting the presence of multiple diseases in chest X-rays (CXRs) remains a challenging task. This paper presents a supervised multi-label classification framework based on deep convolutional neural networks (CNNs) for predicting the risk of 14 common thoracic diseases. We tackle this problem by training state-of-the-art CNNs that exploit dependencies among abnormality labels. We also propose to use label smoothing for better handling of uncertain samples, which occupy a significant portion of almost every CXR dataset. Our model is trained on over 200,000 CXRs from the recently released CheXpert dataset and achieves a mean area under the curve (AUC) of 0.940 in predicting 5 selected pathologies from the validation set, the highest AUC score reported to date. The proposed method is also evaluated on the independent test set of the CheXpert competition, composed of 500 CXR studies annotated by a panel of 5 experienced radiologists. Its performance is on average better than that of 2.6 out of 3 other individual radiologists, with a mean AUC of 0.930, ranking first on the CheXpert leaderboard at the time of writing.
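For reference, a minimal sketch of the label smoothing step for multi-label targets; eps = 0.1 is an assumed illustrative value, not the paper's tuned setting.

```python
import torch
import torch.nn.functional as F

def smooth_labels(targets, eps=0.1):
    """Label smoothing for multi-label CXR targets: hard 1/0 labels are
    pulled to (1 - eps/2) and (eps/2), so uncertain samples are not fit
    with full confidence."""
    return targets * (1.0 - eps) + 0.5 * eps

# Usage with the usual multi-label objective:
# loss = F.binary_cross_entropy_with_logits(logits, smooth_labels(y))
```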
The rapid spread of COVID-19 across the globe has left healthcare systems unable to diagnose and test patients at the needed rate. Studies have shown promising results for detecting COVID-19, as distinct from viral and bacterial pneumonia, in chest X-rays. Automating COVID-19 testing using medical images can speed up the testing of patients where healthcare systems lack sufficient numbers of reverse-transcription polymerase chain reaction (RT-PCR) tests. Supervised deep learning models such as convolutional neural networks (CNNs) need enough labeled data for all classes to learn the detection task correctly. Gathering labeled data is cumbersome and requires time and resources that could further strain healthcare systems and radiologists in the early stages of a pandemic such as COVID-19. In this study, we propose a randomized generative adversarial network (RANDGAN) that detects images of an unknown class (COVID-19) from known and labelled classes (Normal and Viral Pneumonia) without needing labels or training data from the unknown class. We use the largest publicly available COVID-19 chest X-ray dataset, COVIDx, which comprises Normal, Pneumonia, and COVID-19 images from multiple public databases. We apply transfer learning to segment the lungs in the COVIDx dataset, and show why segmenting the region of interest (the lungs) is vital for correctly learning the classification task, especially in datasets whose images come from different sources, as is the case for COVIDx. Finally, we show improved detection of COVID-19 cases using our generative model (RANDGAN) compared to conventional generative adversarial networks (GANs) for anomaly detection in medical images, improving the area under the ROC curve from 0.71 to 0.77.
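The scoring idea behind GAN-based detection of an unseen class can be sketched generically as follows. This is an AnoGAN-style score with hypothetical modules G (generator) and D_feat (discriminator feature extractor); it illustrates the baseline family RANDGAN is compared against, not RANDGAN's exact formulation.

```python
import torch
import torch.nn.functional as F

def gan_anomaly_score(x, G, D_feat, z, lam=0.1):
    """Generic GAN anomaly score: a generator trained only on known
    classes (Normal, Pneumonia) reconstructs them well, so a large
    residual flags a potential unknown-class (COVID-19) image.
    G, D_feat, z and lam are illustrative assumptions."""
    x_hat = G(z)                                  # reconstruction from latent z
    residual = F.l1_loss(x_hat, x)                # pixel-space residual
    feat = F.l1_loss(D_feat(x_hat), D_feat(x))    # discriminator-feature residual
    return (1 - lam) * residual + lam * feat
```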
Thoracic disease detection from chest radiographs using deep learning methods has been an active area of research over the last decade. Most previous methods attempt to focus on the diseased organs by identifying the spatial regions that contribute most to the model's prediction. In contrast, expert radiologists first locate the prominent anatomical structures and then determine whether those regions are anomalous. Integrating anatomical knowledge into deep learning models could therefore bring substantial improvement in automatic disease classification. This work proposes an anatomy-aware, attention-based architecture named Anatomy X-Net that prioritizes spatial features guided by pre-identified anatomical regions. We leverage a semi-supervised learning method using the JSRT dataset, which contains organ-level annotation, to obtain anatomical segmentation masks (for the lungs and heart) for the NIH and CheXpert datasets. The proposed Anatomy X-Net uses a pre-trained DenseNet-121 as the backbone network, with two corresponding structured modules, Anatomy Aware Attention (AAA) and Probabilistic Weighted Average Pooling (PWAP), in a cohesive framework for anatomical attention learning. Our method sets a new state of the art on the official NIH test set with an AUC score of 0.8439, proving the efficacy of using anatomy segmentation knowledge to improve thoracic disease classification. Furthermore, Anatomy X-Net yields an average AUC of 0.9020 on the Stanford CheXpert dataset, improving on existing methods and demonstrating the generalizability of the proposed framework.
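As a sketch of the attention-weighted pooling idea behind a module like PWAP: pool backbone features under a normalized spatial map (here, an anatomy-guided attention map) instead of uniform averaging. Shapes and the normalization scheme are assumptions, not the published module.

```python
import torch

def attention_weighted_pooling(feats, attn):
    """Sketch of probabilistic weighted average pooling: reduce a
    feature map to one descriptor per image, weighting each spatial
    location by an anatomy-guided attention map.
    Assumed shapes: feats (B, C, H, W), attn (B, 1, H, W), attn >= 0."""
    # Normalize the attention map into a spatial probability distribution
    w = attn / (attn.sum(dim=(2, 3), keepdim=True) + 1e-6)
    # Expected feature value under that distribution: (B, C)
    return (feats * w).sum(dim=(2, 3))
```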