
2D Convolutional Neural Networks for 3D Digital Breast Tomosynthesis Classification

Added by Yu Zhang
Publication date: 2020
Language: English





Automated methods for breast cancer detection have focused on 2D mammography and have largely ignored 3D digital breast tomosynthesis (DBT), which is frequently used in clinical practice. The two key challenges in developing automated methods for DBT classification are handling the variable number of slices and retaining slice-to-slice changes. We propose a novel deep 2D convolutional neural network (CNN) architecture for DBT classification that simultaneously overcomes both challenges. Our approach operates on the full volume, regardless of the number of slices, and allows the use of pre-trained 2D CNNs for feature extraction, which is important given the limited amount of annotated training data. In an extensive evaluation on a real-world clinical dataset, our approach achieves an auROC of 0.854, a 28.80% improvement over approaches based on 3D CNNs. We also find that these improvements are stable across a range of model configurations.
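The abstract does not spell out the aggregation mechanism that retains slice-to-slice changes, but the overall recipe, a shared pre-trained 2D backbone applied to every slice followed by a depth-agnostic pooling step, can be sketched as below. The ResNet-18 backbone, the input size, and the max-pooling aggregation are all assumptions standing in for the paper's actual choices:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SlicewiseDBTClassifier(nn.Module):
    # Hypothetical sketch: a pre-trained 2D CNN backbone extracts
    # per-slice features from a DBT volume of any depth; features are
    # then pooled across slices before a single classification head.
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")
        # Keep everything up to (not including) the ImageNet head.
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.head = nn.Linear(512, num_classes)

    def forward(self, volume):
        # volume: (num_slices, 3, H, W); num_slices varies per study.
        f = self.features(volume).flatten(1)   # (num_slices, 512)
        pooled = f.max(dim=0).values           # placeholder aggregation
        return self.head(pooled)

# Volumes with different slice counts pass through the same model.
model = SlicewiseDBTClassifier().eval()
with torch.no_grad():
    logits = model(torch.randn(57, 3, 224, 224))
```

Because the backbone sees one slice at a time, any 2D network pre-trained on natural images can be reused unchanged, which is the property the abstract highlights for small annotated datasets.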



Related research

Breast cancer is the malignant tumor that causes the highest number of cancer deaths in females. Digital mammograms (DM, or 2D mammograms) and digital breast tomosynthesis (DBT, or 3D mammograms) are the two types of mammography imagery used in clinical practice for breast cancer detection and diagnosis. Radiologists usually read both imaging modalities in combination; however, existing computer-aided diagnosis tools are designed using only one imaging modality. Inspired by clinical practice, we propose an innovative convolutional neural network (CNN) architecture for breast cancer classification that uses 2D and 3D mammograms simultaneously. Our experiments show that the proposed method significantly improves the performance of breast cancer classification. By assembling three CNN classifiers, the proposed model achieves an AUC of 0.97, which is 34.72% higher than methods using only one imaging modality.
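The abstract does not say how the three CNN classifiers are assembled; a common choice, sketched below under that assumption, is late fusion by averaging per-study probabilities from the 2D-only, 3D-only, and joint models (the function name and all scores here are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_probability(p_dm, p_dbt, p_joint):
    # Hypothetical late fusion: average per-study malignancy
    # probabilities from a 2D-mammogram CNN, a DBT CNN, and a
    # combined 2D+3D CNN.
    return np.mean(np.stack([p_dm, p_dbt, p_joint]), axis=0)

# Stand-in scores for 5 studies; real values come from the trained CNNs.
labels = np.array([0, 1, 1, 0, 1])
p = ensemble_probability(np.array([0.2, 0.7, 0.9, 0.1, 0.6]),
                         np.array([0.3, 0.8, 0.8, 0.2, 0.7]),
                         np.array([0.1, 0.9, 0.9, 0.1, 0.8]))
print(roc_auc_score(labels, p))
```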
Prostate cancer is one of the most common forms of cancer and the third leading cause of cancer death in North America. As an integrated part of computer-aided detection (CAD) tools, diffusion-weighted magnetic resonance imaging (DWI) has been intensively studied for accurate detection of prostate cancer. With the significant success of deep convolutional neural networks (CNNs) in computer vision tasks such as object detection and segmentation, different CNN architectures are increasingly being investigated in the medical imaging research community as promising solutions for designing more accurate CAD tools for cancer detection. In this work, we developed and implemented an automated CNN-based pipeline for the detection of clinically significant prostate cancer (PCa) for a given axial DWI image and for each patient. DWI images of 427 patients were used as the dataset, which contained 175 patients with PCa and 252 healthy patients. To measure the performance of the proposed pipeline, a test set of 108 (out of 427) patients was set aside and not used in the training phase. The proposed pipeline achieved an area under the receiver operating characteristic curve (AUC) of 0.87 (95% confidence interval (CI): 0.84-0.90) at slice level and 0.84 (95% CI: 0.76-0.91) at patient level.
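The abstract reports AUC at both slice level and patient level but does not state how slice scores are combined per patient. A minimal sketch, assuming the common max-over-slices rule:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def slice_and_patient_auc(slice_probs, slice_labels, patient_ids):
    # Slice-level AUC is computed directly; for patient-level AUC,
    # each patient's slices are collapsed to one score (here: the
    # maximum slice probability, an assumed aggregation rule).
    slice_auc = roc_auc_score(slice_labels, slice_probs)
    patients = np.unique(patient_ids)
    p_scores = np.array([slice_probs[patient_ids == p].max() for p in patients])
    p_labels = np.array([slice_labels[patient_ids == p].max() for p in patients])
    return slice_auc, roc_auc_score(p_labels, p_scores)
```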
Measurements of breast density have the potential to improve the efficiency and reduce the cost of screening mammography through personalized screening. Breast density has traditionally been evaluated from the dense area in a mammogram, but volumetric assessment methods, which measure the volumetric fraction of fibro-glandular tissue in the breast, are potentially more consistent and physically sound. The purpose of the present study is to evaluate a method for measuring volumetric breast density using photon-counting spectral tomosynthesis. The performance of the method was evaluated using phantom measurements and clinical data from a small population (n=18). The precision was determined to be 2.4 percentage points (pp) of volumetric breast density. Strong correlations were observed between contralateral (R^2=0.95) and ipsilateral (R^2=0.96) breast-density measurements. The measured breast density was anti-correlated with breast thickness, as expected, and exhibited a skewed distribution in the range [3.7%, 55%] with a median of 18%. We conclude that the method yields promising results that are consistent with expectations. The relatively high precision of the method may enable novel applications such as treatment monitoring.
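Given the definition above, volumetric breast density as the volumetric fraction of fibro-glandular tissue in the breast, the final measurement step reduces to a ratio over a per-voxel glandularity map. A minimal sketch, assuming such a map is available from the spectral reconstruction:

```python
import numpy as np

def volumetric_breast_density(glandular_fraction, breast_mask):
    # Volumetric breast density is the fibro-glandular volume divided
    # by the total breast volume, in percent. `glandular_fraction` is
    # an assumed per-voxel glandular fraction map derived from the
    # spectral reconstruction (that estimation step is modality-
    # specific and omitted here); `breast_mask` selects breast voxels.
    inside = breast_mask.astype(bool)
    return 100.0 * glandular_fraction[inside].sum() / inside.sum()
```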
Weiwei Zong, Joon Lee, Chang Liu (2019)
Deep learning models have had great success in disease classification using large data pools of skin cancer images or lung X-rays. However, data scarcity has been the roadblock to applying deep learning models directly to prostate multiparametric MRI (mpMRI). Although model interpretation has been heavily studied for natural images over the past few years, there has been little interpretation of deep learning models trained on medical images. This work designs a customized workflow for the small and imbalanced data set of prostate mpMRI, in which features are extracted from a deep learning model and then analyzed by a traditional machine learning classifier. In addition, this work contributes to revealing how deep learning models interpret mpMRI for the stratification of prostate cancer patients.
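The abstract names the workflow (deep features feeding a traditional classifier) without fixing the components. A minimal sketch, assuming a frozen ResNet-18 as the feature extractor and class-weighted logistic regression as the traditional classifier; all inputs below are stand-ins:

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet18
from sklearn.linear_model import LogisticRegression

# Deep features from a frozen pre-trained CNN feed a traditional
# classifier that copes with small, imbalanced data via class weights.
backbone = resnet18(weights="IMAGENET1K_V1")
extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

@torch.no_grad()
def deep_features(images):
    # images: (N, 3, H, W) mpMRI slices rendered as 3-channel inputs.
    return extractor(images).flatten(1).numpy()  # (N, 512)

X = deep_features(torch.randn(40, 3, 224, 224))  # stand-in for real slices
y = np.random.randint(0, 2, size=40)             # stand-in labels
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X, y)
```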
Skullstripping is defined as the task of segmenting brain tissue from a full head magnetic resonance image (MRI). It is a critical component in neuroimage processing pipelines. Downstream deformable registration and whole-brain segmentation performance is highly dependent on accurate skullstripping. Skullstripping is an especially challenging task for infant (age range 0-18 months) head MRI images due to the significant size and shape variability of the head and the brain in that age range. Infant brain tissue development also changes the T1-weighted image contrast over time, making consistent skullstripping a difficult task. Existing tools for adult brain MRI skullstripping are ill-equipped to handle these variations, and a specialized infant MRI skullstripping algorithm is necessary. In this paper, we describe a supervised skullstripping algorithm that utilizes three trained fully convolutional neural networks (CNNs), each of which segments 2D T1-weighted slices in axial, coronal, and sagittal views, respectively. The three probabilistic segmentations in the three views are linearly fused and thresholded to produce a final brain mask. We compared our method to existing adult and infant skullstripping algorithms and showed significant improvement based on the Dice overlap metric (average Dice of 0.97) against a manually labeled ground truth data set. Label fusion experiments on multiple unlabeled data sets show that our method is consistent and has fewer failure modes. In addition, our method is computationally very fast, with a run time of 30 seconds per image on NVIDIA P40/P100/Quadro 4000 GPUs.
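The fusion step is described concretely enough to sketch: three per-view probability volumes are linearly combined and thresholded. The equal weights and 0.5 threshold below are assumptions not given in this listing:

```python
import numpy as np

def fuse_triplanar(p_axial, p_coronal, p_sagittal,
                   weights=(1/3, 1/3, 1/3), threshold=0.5):
    # Linearly fuse the three per-view probability volumes (assumed to
    # be resampled into a common space) and threshold to obtain the
    # final binary brain mask. Equal weights and a 0.5 threshold are
    # assumptions; the abstract specifies only "linearly fused and
    # thresholded".
    fused = (weights[0] * p_axial
             + weights[1] * p_coronal
             + weights[2] * p_sagittal)
    return fused >= threshold
```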