
Automatic deep learning-based normalization of breast dynamic contrast-enhanced magnetic resonance images

Added by Jun Zhang
Publication date: 2018
Language: English





Objective: To develop an automatic image normalization algorithm for intensity correction of images from breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) acquired by different MRI scanners with various imaging parameters, using only image information. Methods: DCE-MR images of 460 subjects with breast cancer acquired by different scanners were used in this study. Each subject had one T1-weighted pre-contrast image and three T1-weighted post-contrast images available. Our normalization algorithm operated under the assumption that the same type of tissue in different patients should be represented by the same voxel value. We used four tissue/material types as the anchors for the normalization: 1) air, 2) fat tissue, 3) dense tissue, and 4) heart. The algorithm proceeded in two steps: First, a state-of-the-art deep learning-based algorithm was applied to perform tissue segmentation accurately and efficiently. Then, based on the segmentation results, a subject-specific piecewise linear mapping function was applied between the anchor points to normalize the same type of tissue in different patients into the same intensity range. We evaluated the algorithm using 300 subjects for training and the remaining 160 for testing. Results: Applying our algorithm to images acquired with different scanning parameters resulted in greatly improved consistency of voxel values and extracted radiomics features. Conclusion: The proposed image normalization strategy based on tissue segmentation can perform intensity correction fully automatically, without knowledge of the scanner parameters. Significance: We have thoroughly tested our algorithm and shown that it successfully normalizes the intensity of DCE-MR images. We have made our software publicly available for others to apply in their analyses.
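As a rough illustration of the mapping step described above, the sketch below applies a subject-specific piecewise linear map between anchor intensities; the anchor values, the function name, and the use of NumPy's interp are assumptions for illustration, not the authors' released software.

```python
import numpy as np

def piecewise_linear_normalize(image, source_anchors, target_anchors):
    """Map voxel intensities so that subject-specific anchor intensities
    (air, fat tissue, dense tissue, heart) land on shared target values.

    image          : ndarray of raw voxel intensities
    source_anchors : increasing anchor intensities measured in this subject,
                     e.g. median values taken from the tissue segmentation
    target_anchors : the common intensities those tissues should map to
    """
    # np.interp applies a piecewise linear map between consecutive anchor
    # pairs and clamps values outside the outermost anchors.
    return np.interp(image, source_anchors, target_anchors)

# Hypothetical usage with illustrative anchor values only.
subject_anchors = [12.0, 310.0, 620.0, 980.0]   # air, fat, dense, heart
common_anchors = [0.0, 100.0, 200.0, 300.0]     # shared target scale
volume = np.random.rand(64, 64, 32) * 1000.0    # stand-in for a T1-weighted volume
normalized = piecewise_linear_normalize(volume, subject_anchors, common_anchors)
```

Because the map is anchored per subject, the same tissue type ends up in the same intensity range across scanners, which is what makes the downstream radiomics features comparable.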



Related research

Acute aortic syndrome (AAS) is a group of life-threatening conditions of the aorta. We have developed an end-to-end automatic approach to detect AAS in computed tomography (CT) images. Our approach consists of two steps. First, we extract N cross sections along the segmented aorta centerline for each CT scan. These cross sections are stacked together to form a new volume, which is then classified using two different classifiers: a 3D convolutional neural network (3D CNN) and a multiple instance learning (MIL) model. We trained, validated, and compared the two models on 2291 contrast CT volumes and tested on a set-aside cohort of 230 normal and 50 positive CT volumes. Our models detected AAS with an area under the receiver operating characteristic curve (AUC) of 0.965 and 0.985 using the 3D CNN and MIL, respectively.
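The sketch below only illustrates the multiple instance learning idea used above: each cross section extracted along the centerline is scored independently, and the scan-level decision pools the instance scores. The tiny architecture, max pooling choice, and PyTorch usage are assumptions, not the authors' model.

```python
import torch
import torch.nn as nn

class MILAortaClassifier(nn.Module):
    """Toy MIL head: score each aortic cross section, then take the maximum
    instance score as the scan-level probability of acute aortic syndrome."""

    def __init__(self):
        super().__init__()
        self.instance_net = nn.Sequential(          # tiny per-slice encoder
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(8, 1),
        )

    def forward(self, slices):                      # slices: (N, 1, H, W)
        scores = torch.sigmoid(self.instance_net(slices))  # one score per slice
        return scores.max()                         # max pooling over instances

model = MILAortaClassifier()
cross_sections = torch.randn(40, 1, 64, 64)         # 40 stacked cross sections
print(model(cross_sections))                         # scan-level AAS probability
```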
Current analysis of tumor proliferation, the most salient prognostic biomarker for invasive breast cancer, is limited to subjective mitosis counting by pathologists in localized regions of tissue images. This study presents the first data-driven integrative approach to characterize the severity of tumor growth and spread on a categorical and molecular level, utilizing multiple biologically salient deep learning classifiers to develop a comprehensive prognostic model. Our approach achieves pathologist-level performance on three-class categorical tumor severity prediction. It additionally pioneers prediction of molecular expression data from a tissue image, obtaining a Spearman's rank correlation coefficient of 0.60 with mean calculated ex vivo RNA expression. Furthermore, our framework is applied to identify over two hundred previously unreported biomarkers critical to the accurate assessment of tumor proliferation, validating our proposed integrative pipeline as the first to holistically and objectively analyze histopathological images.
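For readers unfamiliar with the reported metric, Spearman's rank correlation measures how well a monotonic relationship explains the paired values; the toy numbers below are invented purely to show the computation and are not from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical predicted vs. measured expression values for five genes.
predicted = np.array([1.2, 0.4, 2.9, 1.8, 0.7])
measured = np.array([1.0, 0.6, 3.1, 1.5, 0.9])

rho, p_value = spearmanr(predicted, measured)   # rank-based correlation
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```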
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is used to quantify perfusion and vascular permeability. In most cases a bolus arrival time (BAT) delay exists between the arterial input function (AIF) and the contrast agent arrival in the tissue of interest, and this delay needs to be estimated. Existing methods for BAT estimation are tailored to tissue concentration curves that have a fast upslope to the peak, as frequently observed in patient data. However, they may give poor results for curves that do not have this characteristic shape, such as tissue concentration curves of small animals. In this paper, we propose a novel method for BAT estimation of signals that do not have a fast upslope to their peak. The model is based on splines, which are able to adapt to a large variety of concentration curves. Furthermore, the method estimates BATs on a continuous time scale. All relevant model parameters are automatically determined by generalized cross validation. We use simulated concentration curves of small animal and patient settings to assess the accuracy and robustness of our approach. The proposed method outperforms a state-of-the-art method for small animal data and gives competitive results for patient data. Finally, it is tested on in vivo rat data, where BAT estimation accuracy was also improved over the state-of-the-art method. The results indicate that the proposed method is suitable for accurate BAT estimation of DCE-MRI data, especially for small animals.
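A much simplified sketch of the spline idea is shown below: fit a smoothing spline to a noisy tissue concentration curve and read off the bolus arrival time as the first crossing of a small threshold. The threshold rule, the smoothing factor, and the synthetic curve are assumptions; the paper instead selects model parameters by generalized cross validation and estimates the BAT on a continuous time scale.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0, 120, 121)                        # time in seconds
true_bat = 35.0                                     # simulated bolus arrival
curve = np.where(t < true_bat, 0.0, 1 - np.exp(-(t - true_bat) / 20.0))
noisy = curve + 0.02 * rng.standard_normal(t.size)  # noisy concentration curve

spline = UnivariateSpline(t, noisy, s=0.05)         # smoothing spline fit
fitted = spline(t)
threshold = 0.1 * fitted.max()                      # crude arrival criterion
bat_estimate = t[np.argmax(fitted > threshold)]     # first threshold crossing
print(f"estimated BAT ~ {bat_estimate:.1f} s")
```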
Background and objective: Combined evaluation of lumbosacral structures (e.g., nerves and bone) on multimodal radiographic images is routinely conducted prior to spinal surgery and interventional procedures. Generally, magnetic resonance imaging is conducted to differentiate nerves, while computed tomography (CT) is used to observe bony structures. The aim of this study is to investigate the feasibility of automatically segmenting lumbosacral structures (e.g., nerves and bone) on non-contrast CT with deep learning. Methods: A total of 50 cases with spinal CT were manually labeled for lumbosacral nerves and bone with Slicer 4.8. The training:validation:testing split was 32:8:10. A 3D U-Net was adopted to build the model SPINECT for automatically segmenting lumbosacral structures. Pixel accuracy, IoU, and Dice score were used to assess segmentation performance. Results: The testing results reveal successful segmentation of lumbosacral bone and nerve on CT. The average pixel accuracy is 0.940 for bone and 0.918 for nerve. The average IoU is 0.897 for bone and 0.827 for nerve. The Dice score is 0.945 for bone and 0.905 for nerve. Conclusions: This pilot study indicates that automatically segmenting lumbosacral structures (nerves and bone) on non-contrast CT is feasible and may have utility for planning and navigating spinal interventions and surgery.
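The evaluation metrics quoted above have standard definitions for binary masks; the snippet below computes them on toy arrays so the reported numbers are easy to interpret (illustrative code, not the SPINECT implementation).

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel accuracy, IoU, and Dice score for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    pixel_accuracy = (pred == truth).mean()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return pixel_accuracy, iou, dice

# Hypothetical 4x4 masks standing in for predicted and labeled nerve voxels.
pred = np.zeros((4, 4), dtype=int);  pred[1:3, 1:4] = 1
truth = np.zeros((4, 4), dtype=int); truth[1:3, 0:3] = 1
print(segmentation_metrics(pred, truth))  # pixel acc 0.75, IoU 0.5, Dice ~ 0.67
```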
Breast cancer is one of the leading causes of mortality in women. Early detection and treatment are imperative for improving survival rates, which have steadily increased in recent years as a result of more sophisticated computer-aided diagnosis (CAD) systems. A critical component of breast cancer diagnosis relies on histopathology, a laborious and highly subjective process. Consequently, CAD systems are essential to reduce inter-rater variability and supplement the analyses conducted by specialists. In this paper, a transfer learning-based approach is proposed for the task of classifying breast histology images into four tissue sub-types: normal, benign, in situ carcinoma, and invasive carcinoma. The histology images, provided as part of the BACH 2018 grand challenge, were first normalized to correct for color variations resulting from inconsistencies during slide preparation. Subsequently, image patches were extracted and used to fine-tune Google's Inception-V3 and ResNet50 convolutional neural networks (CNNs), both pre-trained on the ImageNet database, enabling them to learn domain-specific features necessary to classify the histology images. The ResNet50 network (based on residual learning) achieved a test classification accuracy of 97.50% for four classes, outperforming the Inception-V3 network, which achieved an accuracy of 91.25%.
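The fine-tuning recipe described above can be approximated with any modern framework; the sketch below uses a torchvision ResNet50 with a new four-class head, which is an assumption about the setup rather than the paper's exact freezing policy, framework, or hyperparameters.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet50 and swap in a 4-way classifier
# for normal / benign / in situ carcinoma / invasive carcinoma patches.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                     # freeze the backbone (assumed)
model.fc = nn.Linear(model.fc.in_features, 4)       # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

patches = torch.randn(8, 3, 224, 224)               # a batch of histology patches
labels = torch.randint(0, 4, (8,))
loss = criterion(model(patches), labels)            # one fine-tuning step
loss.backward()
optimizer.step()
```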