Medical image processing is one of the most important topics in the field of the Internet of Medical Things (IoMT). Recently, deep learning methods have achieved state-of-the-art performance on medical image tasks. However, conventional deep learning has two main drawbacks: 1) insufficient training data and 2) the domain mismatch between the training data and the testing data. In this paper, we propose a distant domain transfer learning (DDTL) method for medical image classification and apply it to a timely problem, COVID-19 diagnosis. Several recent studies indicate that lung Computed Tomography (CT) images can be used for fast and accurate COVID-19 diagnosis; however, well-labeled training data cannot be easily accessed due to the novelty of the disease and privacy policies. The proposed method has two components: a reduced-size U-Net segmentation model and a Distant Feature Fusion (DFF) classification model. It addresses an important but under-investigated transfer learning problem, termed Distant Domain Transfer Learning (DDTL), which aims to transfer knowledge effectively even when the domains or the tasks are entirely different. In this study, we develop a DDTL model for COVID-19 diagnosis using the unlabeled Office-31, Caltech-256, and chest X-ray image data sets as the source data, and a small set of COVID-19 lung CT images as the target data. The main contributions of this study are: 1) the proposed method benefits from unlabeled data collected from distant domains, which can be easily accessed; 2) it can effectively handle the distribution shift between the training data and the testing data; and 3) it achieves 96% classification accuracy, which is 13% higher than non-transfer algorithms and 8% higher than existing transfer and distant transfer algorithms.
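The following is a minimal, illustrative PyTorch sketch of the two-stage design described above: a reduced-size U-Net that segments lung regions from CT slices, followed by a small CNN classifier whose intermediate features could additionally be exposed to unlabeled distant-domain images (Office-31, Caltech-256, chest X-rays) during training. The module names, layer sizes, and fusion strategy here are assumptions for illustration, not the authors' exact architecture.

```python
# Illustrative sketch only: a reduced-size U-Net plus a simple classifier.
# All names and hyperparameters below are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class ReducedUNet(nn.Module):
    """A shallow (two-level) U-Net that produces a binary lung mask for a CT slice."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d1))

class FusionClassifier(nn.Module):
    """Classifies masked CT slices; in a DDTL setting the same feature extractor
    can also be fed unlabeled source-domain images so that features are shared
    across distant domains (the fusion/alignment loss is omitted here)."""
    def __init__(self, base=16, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1, base), nn.MaxPool2d(2),
            conv_block(base, base * 2), nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(base * 2, n_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

if __name__ == "__main__":
    ct = torch.randn(4, 1, 128, 128)        # a batch of CT slices
    mask = ReducedUNet()(ct)                 # stage 1: lung segmentation
    logits = FusionClassifier()(ct * mask)   # stage 2: classify masked slices
    print(mask.shape, logits.shape)
```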
Transfer learning from natural image datasets, particularly ImageNet, using standard large models and corresponding pretrained weights has become a de-facto method for deep learning applications to medical imaging. However, there are fundamental differences …
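For reference, the de-facto recipe mentioned here amounts to loading an ImageNet-pretrained backbone, replacing its classification head, and fine-tuning on the medical task. A minimal sketch, assuming torchvision and a hypothetical two-class medical imaging dataset:

```python
# Minimal sketch of ImageNet transfer learning for a medical imaging task.
# The dataset, number of classes, and learning rates are placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 backbone with ImageNet weights (torchvision >= 0.13 API).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # replace the 1000-way ImageNet head

# Common choice: smaller learning rate for pretrained layers than for the new head.
optimizer = torch.optim.SGD([
    {"params": [p for n, p in model.named_parameters() if not n.startswith("fc")],
     "lr": 1e-4},
    {"params": model.fc.parameters(), "lr": 1e-3},
], momentum=0.9)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for real images.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```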
The purpose of this study is to analyze the efficacy of transfer learning techniques and transformer-based models as applied to medical natural language processing (NLP) tasks, specifically radiological text classification. We used 1,977 labeled head …
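A minimal sketch of what transformer-based fine-tuning for radiology report classification typically looks like, using Hugging Face Transformers; the checkpoint, labels, and example reports below are placeholders, not the study's actual data or setup:

```python
# Illustrative sketch: fine-tuning a pretrained transformer for report classification.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # a clinically pretrained checkpoint could be swapped in
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

reports = ["No acute intracranial abnormality.",
           "Findings concerning for acute ischemia."]
labels = torch.tensor([0, 1])  # 0 = normal, 1 = abnormal (placeholder labels)

batch = tokenizer(reports, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)  # forward pass returns loss and logits
outputs.loss.backward()                  # an optimizer step would follow in training
print(outputs.logits.softmax(dim=-1))
```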
Advances in computing power, deep learning architectures, and expert-labelled datasets have spurred the development of medical imaging artificial intelligence systems that rival clinical experts in a variety of scenarios. The National Institutes of Health …
Transfer learning is a standard technique to improve performance on tasks with limited data. However, for medical imaging, the value of transfer learning is less clear. This is likely due to the large domain mismatch between the usual natural-image pre-training …
Recently, we have witnessed great progress in the field of medical imaging classification by adopting deep neural networks. However, the recent advanced models still require accessing sufficiently large and representative datasets for training, which …