Mammography remains the most prevalent imaging tool for early breast cancer screening. The language used to describe abnormalities in mammographic reports is based on the Breast Imaging Reporting and Data System (BI-RADS). Assigning the correct BI-RADS category to each examined mammogram is a strenuous and challenging task even for experts. This paper proposes a new and effective computer-aided diagnosis (CAD) system to classify mammographic masses into four BI-RADS assessment categories. The mass regions are first enhanced by means of histogram equalization and then semi-automatically segmented using the region growing technique. A total of 130 handcrafted BI-RADS features are then extracted from the shape, margin, and density of each mass, together with the mass size and the patient's age, as described in the BI-RADS mammography lexicon. A modified feature selection method based on the genetic algorithm (GA) is then proposed to select the most clinically significant BI-RADS features. Finally, a back-propagation neural network (BPN) is employed for classification, and its accuracy is used as the fitness function in the GA. A set of 500 mammogram images from the Digital Database for Screening Mammography (DDSM) is used for evaluation. Our system achieves a classification accuracy, positive predictive value, negative predictive value, and Matthews correlation coefficient of 84.5%, 84.4%, 94.8%, and 79.3%, respectively. To the best of our knowledge, this is the best reported result for BI-RADS classification of breast masses in mammography, which makes the proposed system a promising tool for supporting radiologists in deciding proper patient management based on the automatically assigned BI-RADS categories.
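To make the GA-plus-BPN wrapper concrete, the following minimal sketch performs feature selection in which each chromosome masks a subset of the 130 features and the cross-validated accuracy of a small back-propagation network serves as the fitness. The synthetic data, population size, mutation rate, and network size are illustrative assumptions, not the configuration reported in the paper.

```python
# Sketch of GA-based feature selection with a neural network's accuracy as the fitness.
# Data and GA hyperparameters are placeholders, not the authors' actual setup.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 130))          # 130 handcrafted BI-RADS features
y = rng.integers(0, 4, size=500)         # four BI-RADS assessment categories

def fitness(mask):
    """Mean CV accuracy of a small back-propagation network on the selected features."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1]))         # binary chromosomes
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]             # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.integers(0, 10, size=2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])      # one-point crossover
        flip = rng.random(X.shape[1]) < 0.01            # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", int(best.sum()))
```

A wrapper of this kind lets the classifier itself judge candidate feature subsets, at the cost of one model fit per chromosome per generation.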
Objective: We develop a computer-aided diagnosis (CAD) system using deep learning approaches for lesion detection and classification on whole-slide images (WSIs) of breast cancer. The deep features learned by the convolutional neural networks (CNNs), which drive the classification, are analyzed in this study to provide comprehensive interpretability for the proposed CAD system in terms of pathological knowledge. Methods: In the experiment, a total of 186 WSIs were collected and classified into three categories: Non-Carcinoma, Ductal Carcinoma in Situ (DCIS), and Invasive Ductal Carcinoma (IDC). Instead of conducting pixel-wise classification into three classes directly, we designed a hierarchical framework with a multi-view scheme that first performs lesion detection for region proposal at higher magnification and then conducts lesion classification at lower magnification for each detected lesion. Results: The slide-level accuracy for three-category classification reaches 90.8% (99/109) under 5-fold cross-validation and 94.8% (73/77) on the testing set. The experimental results show that the morphological characteristics and co-occurrence properties learned by the deep learning models for lesion classification are consistent with the clinical rules used in diagnosis. Conclusion: The pathological interpretability of the deep features not only enhances the reliability of the proposed CAD system, helping it gain acceptance from medical specialists, but also facilitates the development of deep learning frameworks for various tasks in pathology. Significance: This paper presents a CAD system for pathological image analysis that meets clinical requirements and, by providing interpretability from the pathological perspective, can be accepted by medical specialists.
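The hierarchical, multi-view idea can be sketched as a two-stage inference pipeline: a detector scores tiles at one magnification to propose lesion regions, and a second network re-reads each proposed region at another magnification to assign one of the three lesion categories. The backbones, tile size, and detection threshold below are assumptions for illustration only, not the authors' architecture.

```python
# Illustrative two-stage, multi-view pipeline: lesion detection for region proposal,
# then three-class lesion classification (Non-Carcinoma / DCIS / IDC) per proposal.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class TwoStageWSIPipeline(nn.Module):
    def __init__(self, num_classes=3):
        super().__init__()
        self.detector = resnet18(num_classes=1)        # tile-level lesion score
        self.classifier = resnet18(num_classes=num_classes)

    @torch.no_grad()
    def forward(self, high_mag_tiles, low_mag_tiles, threshold=0.5):
        # Stage 1: score each high-magnification tile for lesion presence.
        lesion_prob = torch.sigmoid(self.detector(high_mag_tiles)).squeeze(1)
        keep = lesion_prob > threshold
        if keep.sum() == 0:
            return None, lesion_prob
        # Stage 2: classify the paired lower-magnification view of each proposal.
        logits = self.classifier(low_mag_tiles[keep])
        return logits.softmax(dim=1), lesion_prob

model = TwoStageWSIPipeline()
high = torch.randn(8, 3, 224, 224)   # paired views of the same regions
low = torch.randn(8, 3, 224, 224)
probs, scores = model(high, low)
```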
Low-grade endometrial stromal sarcoma (LGESS) is a rare form of cancer, accounting for about 0.2% of all uterine cancer cases. Approximately 75% of LGESS patients are initially misdiagnosed with leiomyoma, a type of benign tumor also known as fibroids. In this research, uterine tissue biopsy images of potential LGESS patients are preprocessed using segmentation and stain normalization algorithms. A variety of classic machine learning and leading deep learning models are then applied to classify tissue images as either benign or cancerous. For the classic techniques considered, the highest classification accuracy we attain is about 0.85, while our best deep learning model achieves an accuracy of approximately 0.87. These results indicate that properly trained learning algorithms can play a useful role in the diagnosis of LGESS.
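The abstract does not state which stain normalization algorithm is used; as one common choice, the sketch below implements Reinhard-style normalization, which matches the LAB-space statistics of each tile to a reference tile. The function name and the reference-tile choice are hypothetical.

```python
# Minimal sketch of Reinhard-style stain normalization, used here only as an
# illustrative stand-in for the unspecified "stain normalization" step.
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def reinhard_normalize(source_rgb, target_rgb):
    """Match the LAB-space mean/std of a source tile to a reference tile."""
    src, tgt = rgb2lab(source_rgb), rgb2lab(target_rgb)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1))
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    mapped = (src - src_mean) / (src_std + 1e-8) * tgt_std + tgt_mean
    return np.clip(lab2rgb(mapped), 0, 1)

src = np.random.rand(64, 64, 3)      # stand-ins for a biopsy tile and a reference tile
ref = np.random.rand(64, 64, 3)
normalized = reinhard_normalize(src, ref)
```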
Current computer-aided diagnosis (CAD) methods mainly depend on medical images. The clinical information that usually needs to be considered in practical clinical diagnosis has not been fully exploited in CAD. In this paper, we propose a novel deep learning-based method for fusing Magnetic Resonance Imaging (MRI)/Computed Tomography (CT) images and clinical information for diagnostic tasks. Two paths of neural layers extract image features and clinical features, respectively, and the clinical features are simultaneously employed as an attention mechanism to guide the extraction of image features. Finally, the two modalities of features are concatenated to make decisions. We evaluate the proposed method on Alzheimer's disease diagnosis, mild cognitive impairment (MCI) conversion prediction, and hepatic microvascular invasion diagnosis. The encouraging experimental results demonstrate the value of image feature extraction guided by clinical features and of concatenating the two feature modalities for classification, which improves diagnostic performance effectively and stably.
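The two-path, clinically guided fusion design can be illustrated with the following minimal PyTorch sketch, in which clinical features both gate the image feature channels (attention) and are concatenated with them for the final decision. The encoder depths, feature sizes, and 3D input shape are assumptions, not the published architecture.

```python
# Sketch of a two-path fusion model: clinical features gate the image features
# (channel-wise attention), then the two feature vectors are concatenated.
import torch
import torch.nn as nn

class ClinicalGuidedFusion(nn.Module):
    def __init__(self, num_clinical=10, num_classes=2):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten())            # -> 32-d image feature
        self.clinical_encoder = nn.Sequential(
            nn.Linear(num_clinical, 32), nn.ReLU())           # -> 32-d clinical feature
        self.attention = nn.Sequential(nn.Linear(32, 32), nn.Sigmoid())
        self.head = nn.Linear(32 + 32, num_classes)

    def forward(self, image, clinical):
        img_feat = self.image_encoder(image)
        cli_feat = self.clinical_encoder(clinical)
        # Clinical features act as channel-wise attention on the image features.
        img_feat = img_feat * self.attention(cli_feat)
        return self.head(torch.cat([img_feat, cli_feat], dim=1))

model = ClinicalGuidedFusion()
logits = model(torch.randn(4, 1, 32, 32, 32), torch.randn(4, 10))
```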
Data scarcity and class imbalance are two fundamental challenges in many machine learning applications to healthcare. Breast cancer classification in mammography exemplifies these challenges, with a malignancy rate of around 0.5% in a screening population, compounded by the relatively small size of lesions (~1% of the image) in malignant cases. Simultaneously, the prevalence of screening mammography creates a potential abundance of non-cancer exams to use for training. Altogether, these characteristics lead to overfitting on cancer cases while under-utilizing non-cancer data. Here, we present a novel generative adversarial network (GAN) model for data augmentation that can realistically synthesize and remove lesions on mammograms. With self-attention and semi-supervised learning components, the U-Net-based architecture can generate high-resolution (256×256 px) outputs, as required for mammography. When augmenting the original training set with the GAN-generated samples, we find a significant improvement in malignancy classification performance on a test set of real mammogram patches. Overall, the empirical results of our algorithm and its relevance to other medical imaging paradigms point to potentially fruitful further applications.
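Of the components named above, the self-attention block is the most self-contained to illustrate; the sketch below follows the widely used SAGAN-style formulation of 2D self-attention as one plausible choice, and does not reproduce the authors' generator, discriminator, or semi-supervised training.

```python
# Sketch of a SAGAN-style 2D self-attention block, as might be inserted into a
# U-Net generator; layer sizes and placement are illustrative assumptions.
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)        # (b, hw, c//8)
        k = self.key(x).flatten(2)                          # (b, c//8, hw)
        attn = torch.softmax(q @ k, dim=-1)                 # (b, hw, hw)
        v = self.value(x).flatten(2)                        # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

block = SelfAttention2d(64)
feat = block(torch.randn(2, 64, 32, 32))   # e.g. a mid-resolution feature map
```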
Deep learning can promote mammography-based computer-aided diagnosis (CAD) of breast cancer, but it generally suffers from the small-sample-size problem. Self-supervised learning (SSL) has shown its effectiveness in medical image analysis with limited training samples. However, the network model sometimes cannot be well pre-trained in the conventional SSL framework due to the limitations of the pretext task and the fine-tuning mechanism. In this work, a Task-driven Self-supervised Bi-channel Networks (TSBN) framework is proposed to improve the performance of the classification model in mammography-based CAD. In particular, a new gray-scale image mapping (GSIM) is designed as the pretext task, which embeds the class-label information of mammograms into the image restoration task to improve discriminative feature representation. The proposed TSBN then innovatively integrates different network architectures, including the image restoration network and the classification network, into a unified SSL framework. It jointly trains the bi-channel network models and collaboratively transfers the knowledge from the pretext-task network to the downstream-task network, improving diagnostic accuracy. The proposed TSBN is evaluated on the public INbreast mammogram dataset. The experimental results indicate that it outperforms conventional SSL and multi-task learning algorithms for the diagnosis of breast cancer with limited samples.
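As a rough illustration of jointly training a restoration (pretext) channel and a classification (downstream) channel, the sketch below optimizes both networks with a weighted sum of an image restoration loss and a diagnosis loss. The GSIM mapping, knowledge-transfer mechanism, network depths, and loss weight are simplifying assumptions rather than the paper's exact design.

```python
# Sketch of joint bi-channel training: a restoration network for the pretext task
# and a classification network for diagnosis, optimized together.
import torch
import torch.nn as nn
from torchvision.models import resnet18

restorer = nn.Sequential(                     # pretext-task (image restoration) channel
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1))
classifier = resnet18(num_classes=2)          # downstream diagnosis channel
classifier.conv1 = nn.Conv2d(1, 64, 7, stride=2, padding=3, bias=False)  # grayscale input

opt = torch.optim.Adam(list(restorer.parameters()) + list(classifier.parameters()), lr=1e-4)
l1, ce = nn.L1Loss(), nn.CrossEntropyLoss()

def training_step(degraded, target, label, alpha=0.5):
    """One joint update: restoration loss on the pretext task plus diagnosis loss."""
    restored = restorer(degraded)
    loss = alpha * l1(restored, target) + (1 - alpha) * ce(classifier(restored), label)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

loss = training_step(torch.randn(4, 1, 224, 224), torch.randn(4, 1, 224, 224),
                     torch.randint(0, 2, (4,)))
```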