
Diagnosis of Coronavirus Disease 2019 (COVID-19) with Structured Latent Multi-View Representation Learning

Posted by: Feng Shi
Publication date: 2020
Language: English





The outbreak of Coronavirus Disease 2019 (COVID-19) has recently spread rapidly across the world. Owing to the large number of affected patients and the heavy workload placed on doctors, computer-aided diagnosis with machine learning algorithms is urgently needed; it could greatly reduce the effort required of clinicians and accelerate the diagnostic process. Chest computed tomography (CT) has been recognized as an informative tool for diagnosing the disease. In this study, we propose to diagnose COVID-19 using a series of features extracted from CT images. To fully exploit multiple features that describe CT images from different views, a unified latent representation is learned that completely encodes information from the different feature types and is endowed with a promising class structure for separability. Specifically, completeness is guaranteed by a group of backward neural networks (one for each type of features), while class labels are used to enforce compactness within the COVID-19 and community-acquired pneumonia (CAP) classes and to guarantee a large margin between the two types of pneumonia. In this way, the model avoids the overfitting that arises from directly projecting high-dimensional features onto class labels. Extensive experimental results show that the proposed method outperforms all comparison methods, and its performance remains stable when the amount of training data is varied.
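As a rough illustration of the idea described above, the following PyTorch sketch (not the authors' implementation) learns a shared latent code for each scan, reconstructs every feature view from it with a separate backward network, and adds a label-driven margin term that keeps COVID-19 and CAP samples compact within their class and separated from each other. All dimensions, network sizes, and loss weights are illustrative assumptions.

```python
# Minimal sketch of structured latent multi-view representation learning.
# The latent matrix H, view dimensions, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

n_samples, latent_dim = 128, 64
view_dims = [96, 48, 32]            # e.g. three handcrafted CT feature sets

# Learnable latent representation, one row per CT scan.
H = nn.Parameter(torch.randn(n_samples, latent_dim))

# One small backward (decoder) network per view: latent -> view features.
decoders = nn.ModuleList([
    nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, d))
    for d in view_dims
])

views = [torch.randn(n_samples, d) for d in view_dims]   # placeholder features
labels = torch.randint(0, 2, (n_samples,))               # 0 = CAP, 1 = COVID-19

optimizer = torch.optim.Adam([H, *decoders.parameters()], lr=1e-3)

def margin_loss(z, y, margin=1.0):
    """Pull same-class latent codes together, push different classes apart."""
    dist = torch.cdist(z, z)                      # pairwise Euclidean distances
    same = (y[:, None] == y[None, :]).float()
    pull = (same * dist.pow(2)).mean()
    push = ((1 - same) * F.relu(margin - dist).pow(2)).mean()
    return pull + push

for step in range(200):
    optimizer.zero_grad()
    # Completeness: every view must be recoverable from the shared latent code.
    recon = sum(F.mse_loss(dec(H), x) for dec, x in zip(decoders, views))
    loss = recon + 0.1 * margin_loss(H, labels)
    loss.backward()
    optimizer.step()
```

The sketch only covers the training objective; at test time, unseen samples would need their own latent codes (for example, obtained by optimizing the reconstruction term with the decoders frozen) before classification.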




Read also

The novel coronavirus (COVID-19) outbreak has raised a calamitous situation all over the world and has become one of the most acute and severe ailments of the past hundred years. The prevalence of COVID-19 is rising rapidly every day throughout the globe. Although no vaccine for this pandemic has been discovered yet, deep learning techniques have proved to be a powerful tool in the arsenal used by clinicians for the automatic diagnosis of COVID-19. This paper overviews recently developed systems based on deep learning techniques using different medical imaging modalities such as computed tomography (CT) and X-ray. The review specifically discusses the systems developed for COVID-19 diagnosis using deep learning techniques and provides insights into the well-known datasets used to train these networks. It also highlights the data-partitioning techniques and the various performance measures used by researchers in this field. A taxonomy is drawn to categorize the recent works for proper insight. Finally, we conclude by addressing the challenges associated with the use of deep learning methods for COVID-19 detection and probable future trends in this research area. This paper is intended to provide experts (medical or otherwise) and technicians with new insights into how deep learning techniques are used in this regard and how they can be further employed in combating the outbreak of COVID-19.
Convolutional neural networks are showing promise in the automatic diagnosis of thoracic pathologies on chest X-rays. Their black-box nature has sparked many recent works that explain predictions via input feature attribution methods (also known as saliency methods). However, input feature attribution methods merely identify the importance of input regions for the prediction and lack a semantic interpretation of model behavior. In this work, we first identify the semantics associated with internal units (feature maps) of the network. We then investigate the following questions: Does a regression model that is trained only with COVID-19 severity scores implicitly learn visual patterns associated with thoracic pathologies? Does a network trained on weakly labeled data (e.g., healthy, unhealthy) implicitly learn pathologies? Moreover, we investigate the effect of pretraining and data imbalance on the interpretability of the learned features. In addition to this analysis, we propose semantic attribution to explain each prediction semantically. We present our findings using publicly available chest pathology datasets (CheXpert, NIH ChestX-ray8) and COVID-19 datasets (BrixIA and the COVID-19 chest X-ray segmentation dataset). The code is publicly available.
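To make the idea of attaching semantics to internal units more concrete, the sketch below (a simplified stand-in, not the paper's method) thresholds each feature map of a convolutional layer and measures its overlap (IoU) with a pathology mask, in the spirit of network-dissection-style probing. The backbone, layer, threshold, and the placeholder image and mask are all assumptions for illustration.

```python
# Probe which internal unit of a CNN best overlaps a pathology mask (sketch).
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()   # placeholder backbone, random weights
activations = {}

def hook(_module, _inp, out):
    activations["layer4"] = out.detach()

model.layer4.register_forward_hook(hook)

x = torch.randn(1, 3, 224, 224)          # placeholder chest X-ray
pathology_mask = torch.zeros(224, 224)   # placeholder binary lesion mask
pathology_mask[80:140, 90:160] = 1.0

with torch.no_grad():
    model(x)

fmaps = activations["layer4"][0]                       # (C, h, w)
fmaps = F.interpolate(fmaps[None], size=(224, 224),
                      mode="bilinear", align_corners=False)[0]

def iou(unit_map, mask, quantile=0.95):
    """Threshold one unit's activation map and compare it to the mask."""
    thr = torch.quantile(unit_map, quantile)
    act = (unit_map >= thr).float()
    inter = (act * mask).sum()
    union = ((act + mask) > 0).float().sum()
    return (inter / union.clamp(min=1)).item()

scores = [iou(fmaps[c], pathology_mask) for c in range(fmaps.shape[0])]
best = max(range(len(scores)), key=scores.__getitem__)
print(f"unit {best} best matches the pathology mask (IoU={scores[best]:.3f})")
```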
Radiological images are currently adopted as the visual evidence for COVID-19 diagnosis in clinical practice. Using deep models to realize automated infection measurement and COVID-19 diagnosis is important for faster examination based on radiological imaging. Unfortunately, collecting large amounts of training data systematically in the early stage of an outbreak is difficult. To address this problem, we explore the feasibility of learning deep models for COVID-19 diagnosis from a single radiological image by synthesizing diverse radiological images. Specifically, we propose a novel conditional generative model, called CoSinGAN, which can be learned from a single radiological image with a given condition, i.e., the annotations of the lung and of the COVID-19 infection. CoSinGAN is able to capture the conditional distribution of the visual findings of COVID-19 infection and to synthesize diverse, high-resolution radiological images that match the input conditions precisely. Both deep classification and segmentation networks trained on samples synthesized by CoSinGAN achieve notable detection accuracy for COVID-19 infection. These results are significantly better than those of counterparts trained on the same extremely small number of real samples (1 or 2 real images) with strong data augmentation, and approach those of counterparts trained on a large dataset (2,846 real images). This confirms that our method can significantly reduce the performance gap between deep models trained on an extremely small dataset and on a large dataset, and thus has the potential to enable learning COVID-19 diagnosis from few radiological images in the early stage of a pandemic. Our code is publicly available at https://github.com/PengyiZhang/CoSinGAN.
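The snippet below is a heavily simplified, hedged sketch of the conditional synthesis idea (not the CoSinGAN architecture): a small generator maps an annotation mask plus noise to an image, and a patch discriminator judges (condition, image) pairs, trained against a single real slice with an added reconstruction term. Architectures, image sizes, and loss weights are illustrative assumptions only.

```python
# Toy conditional GAN trained on a single (annotation, image) pair (sketch).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, cond_ch=2, noise_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_ch + noise_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, cond, noise):
        return self.net(torch.cat([cond, noise], dim=1))

class Discriminator(nn.Module):
    def __init__(self, cond_ch=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(cond_ch + 1, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, stride=2, padding=1),   # patch-level real/fake map
        )

    def forward(self, cond, img):
        return self.net(torch.cat([cond, img], dim=1))

# Single training pair: lung + infection masks as the condition, one real slice.
cond = torch.rand(1, 2, 128, 128)          # placeholder annotations
real = torch.rand(1, 1, 128, 128) * 2 - 1  # placeholder CT slice in [-1, 1]

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(100):
    noise = torch.randn(1, 1, 128, 128)
    fake = G(cond, noise)

    # Discriminator: real pair vs. generated pair.
    opt_d.zero_grad()
    d_real, d_fake = D(cond, real), D(cond, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    loss_d.backward()
    opt_d.step()

    # Generator: fool the discriminator and stay close to the real slice.
    opt_g.zero_grad()
    d_fake = D(cond, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 10.0 * (fake - real).abs().mean()
    loss_g.backward()
    opt_g.step()
```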
The outbreak of novel coronavirus disease (COVID-19) has claimed millions of lives and has affected all aspects of human life. This paper focuses on the application of deep learning (DL) models to medical imaging and drug discovery for managing COVID-19. We detail various medical-imaging-based studies, using X-ray and computed tomography (CT) images together with DL methods, for classifying COVID-19-affected patients versus pneumonia patients. The applications of DL techniques to medical images are further described in terms of image localization, segmentation, registration, and classification leading to COVID-19 detection. The reviewed papers indicate that the highest classification accuracy of 99.80% is obtained when the InstaCovNet-19 DL method is applied to an X-ray dataset of 361 COVID-19 patients, 362 pneumonia patients, and 365 normal people. Furthermore, the best classification accuracy of 99.054% can be achieved when the EDL_COVID DL method is applied to a CT image dataset of 7,500 samples in which COVID-19 patients, lung tumor patients, and normal people are equal in number. Moreover, we illustrate the potential of DL techniques in drug or vaccine discovery for combating the coronavirus. Finally, we address a number of problems, concerns, and future research directions relevant to DL applications for COVID-19.
The newly identified coronavirus pneumonia, subsequently termed COVID-19, is highly transmissible and pathogenic, with no clinically approved antiviral drug or vaccine available for treatment. The most common symptoms of COVID-19 are dry cough, sore throat, and fever. Symptoms can progress to a severe form of pneumonia with critical complications, including septic shock, pulmonary edema, acute respiratory distress syndrome, and multi-organ failure. While medical imaging is not currently recommended in Canada for the primary diagnosis of COVID-19, computer-aided diagnosis systems could assist in the early detection of COVID-19 abnormalities, help to monitor the progression of the disease, and potentially reduce mortality rates. In this study, we compare popular deep learning-based feature extraction frameworks for automatic COVID-19 classification. To obtain the most accurate features, which are an essential component of learning, MobileNet, DenseNet, Xception, ResNet, InceptionV3, InceptionResNetV2, VGGNet, and NASNet were chosen from a pool of deep convolutional neural networks. The extracted features were then fed into several machine learning classifiers to classify subjects as either COVID-19 cases or controls. This approach avoids task-specific data pre-processing and thus supports better generalization to unseen data. The performance of the proposed method was validated on a publicly available COVID-19 dataset of chest X-ray and CT images. The DenseNet121 feature extractor with a Bagging tree classifier achieved the best performance, with 99% classification accuracy. The second-best learner was a ResNet50 feature extractor combined with LightGBM, with an accuracy of 98%.
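The pipeline described above, a pretrained CNN used as a fixed feature extractor feeding a classical classifier, can be sketched as follows. The snippet assumes an ImageNet-pretrained DenseNet121 backbone and bagged decision trees, with random tensors standing in for preprocessed chest images; it illustrates the approach rather than reproducing the authors' exact configuration.

```python
# Deep features from a pretrained CNN + a classical ensemble classifier (sketch).
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import densenet121
from sklearn.ensemble import BaggingClassifier

# DenseNet121 backbone with the classification head removed (ImageNet weights).
backbone = densenet121(weights="IMAGENET1K_V1")
backbone.classifier = nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(images: torch.Tensor) -> np.ndarray:
    """images: (N, 3, 224, 224) preprocessed tensor -> (N, 1024) features."""
    return backbone(images).cpu().numpy()

# Placeholder data standing in for preprocessed chest X-ray / CT images.
train_imgs, test_imgs = torch.randn(40, 3, 224, 224), torch.randn(10, 3, 224, 224)
y_train, y_test = np.random.randint(0, 2, 40), np.random.randint(0, 2, 10)

# Bagging with the default decision-tree base estimator ("Bagging tree").
clf = BaggingClassifier(n_estimators=50)
clf.fit(extract_features(train_imgs), y_train)
print("test accuracy:", clf.score(extract_features(test_imgs), y_test))
```

Because the backbone is frozen, no image-specific fine-tuning is required; swapping DenseNet121 for ResNet50 and the bagging ensemble for LightGBM would give the second-best configuration mentioned above.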
