As one of the most commonly ordered imaging tests, the computed tomography (CT) scan comes with inevitable radiation exposure that increases patients' cancer risk. However, CT image quality is directly related to radiation dose, so it is desirable to obtain high-quality CT images with as little dose as possible. CT image denoising aims to obtain high-dose-like, high-quality CT images from low-dose, low-quality CT images, which can be treated as an image-to-image translation task whose goal is to learn the transform between a source domain X (noisy images) and a target domain Y (clean images). In this paper, we propose a multi-cycle-consistent adversarial network (MCCAN) that builds intermediate domains and enforces both local and global cycle-consistency for edge denoising of CT images. The global cycle-consistency couples all generators together to model the whole denoising process, while the local cycle-consistency imposes effective supervision on the process between adjacent domains. Experiments show that both local and global cycle-consistency are important for the success of MCCAN, which outperforms CCADN in terms of denoising quality with slightly lower computational cost.
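To make the two consistency terms concrete, the following minimal PyTorch sketch computes local and global cycle-consistency losses over a chain of generators through intermediate domains. The function name, the L1 reconstruction penalty, and the stand-in one-layer "generators" in the usage example are illustrative assumptions, not the authors' actual MCCAN implementation.

```python
# Minimal sketch of local and global cycle-consistency over a chain of
# intermediate domains (illustrative, not the authors' implementation).
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_losses(x, gens_xy, gens_yx):
    """x: noisy batch; gens_xy: generators stepping X -> ... -> Y;
    gens_yx: generators stepping Y -> ... -> X (reverse order)."""
    # Local cycle-consistency: each adjacent domain pair must reconstruct.
    local = 0.0
    cur = x
    for g_fwd, g_bwd in zip(gens_xy, reversed(gens_yx)):
        nxt = g_fwd(cur)
        local = local + l1(g_bwd(nxt), cur)
        cur = nxt
    # Global cycle-consistency: the full chain X -> Y -> X must reconstruct,
    # coupling all generators together.
    y_hat = x
    for g in gens_xy:
        y_hat = g(y_hat)
    x_rec = y_hat
    for g in gens_yx:
        x_rec = g(x_rec)
    return local, l1(x_rec, x)

# Usage with one intermediate domain (two generator steps each way);
# single conv layers stand in for real generators.
gens_xy = [nn.Conv2d(1, 1, 3, padding=1), nn.Conv2d(1, 1, 3, padding=1)]
gens_yx = [nn.Conv2d(1, 1, 3, padding=1), nn.Conv2d(1, 1, 3, padding=1)]
loc, glob = cycle_losses(torch.randn(2, 1, 64, 64), gens_xy, gens_yx)
```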
Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE-MRI) is widely used to complement ultrasound examinations and X-ray mammography during the early detection and diagnosis of breast cancer. However, images generated by various MRI scanners (e.g., GE Healthcare vs. Siemens) differ in both intensity and noise distribution, preventing algorithms trained on MRIs from one scanner from generalizing successfully to data from other scanners. We propose an image normalization method to solve this problem. MRI normalization is challenging because it requires both normalizing intensity values and mapping between the noise distributions of different scanners. We utilize a cycle-consistent generative adversarial network to learn a bidirectional mapping between MRIs produced by GE Healthcare and Siemens scanners. This allows us to learn the mapping between two different scanner types without matched data, which are not commonly available. To ensure the preservation of breast shape and structures within the breast, we propose two technical innovations. First, we incorporate a mutual information loss into the CycleGAN architecture to ensure that the structure of the breast is maintained. Second, we propose a modified discriminator architecture that uses a smaller field of view to ensure the preservation of finer details in the breast tissue. Quantitative and qualitative evaluations show that the second proposed method consistently preserved a high level of detail in the breast structure while also performing proper intensity normalization and noise mapping. Our results demonstrate that the proposed model can successfully learn a bidirectional mapping between MRIs produced by different vendors, potentially enabling improved accuracy of downstream computational algorithms for the diagnosis and detection of breast cancer. All data used in this study are publicly available.
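As an illustration of the second innovation, the sketch below shows a PatchGAN-style discriminator whose reduced depth shrinks each output score's receptive field to roughly a 16x16 patch, so the generator is pressured to get fine local detail right. The class name, layer counts, and channel widths are assumptions for illustration, not the authors' exact architecture.

```python
# PatchGAN-style discriminator with a small field of view (illustrative).
import torch
import torch.nn as nn

class SmallFOVDiscriminator(nn.Module):
    """Fewer/shallower strided conv layers than a standard 70x70 PatchGAN,
    so each spatial logit depends on a smaller input patch, encouraging
    preservation of fine tissue detail."""
    def __init__(self, in_ch=1, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=1, padding=1),
            nn.InstanceNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, x):
        return self.net(x)

# Each output logit covers roughly a 16x16 input patch instead of 70x70.
d = SmallFOVDiscriminator()
scores = d(torch.randn(2, 1, 256, 256))
```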
The insufficiency of annotated medical imaging scans for cancer makes it challenging to train and validate data-hungry deep learning models in precision oncology. We propose a new generative adversarial network with richer features for free-form 3D tumor/lesion synthesis in computed tomography (CT) images. The network is composed of a richer convolutional feature enhanced dilated-gated generator (RicherDG) and a hybrid loss function. The RicherDG has dilated-gated convolution layers to enable tumor painting and to enlarge receptive fields, and a novel richer convolutional feature association branch to recover multi-scale convolutional features, especially from uncertain boundaries between tumors and surrounding healthy tissue. The hybrid loss function, which consists of a diverse range of losses, is designed to aggregate complementary information to improve optimization. We perform a comprehensive evaluation of the synthesis results on a wide range of public CT image datasets covering liver tumors, kidney tumors, and lung nodules. Qualitative and quantitative evaluations and an ablation study demonstrate improved synthesis results over advanced tumor synthesis methods.
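The dilated-gated convolution named above follows the well-known gated-convolution pattern from free-form image inpainting: a feature branch modulated by a parallel sigmoid gate, with dilation enlarging the receptive field at no extra parameter cost. The PyTorch sketch below shows one such layer; the class name and channel widths are illustrative assumptions, not the RicherDG's exact layer.

```python
# One dilated-gated convolution layer (illustrative sketch).
import torch
import torch.nn as nn

class DilatedGatedConv2d(nn.Module):
    """Gated convolution: one branch produces features, a parallel branch
    produces a sigmoid gate that softly selects them, which is useful for
    painting content into free-form regions. Dilation widens the
    receptive field without adding parameters."""
    def __init__(self, in_ch, out_ch, dilation=2):
        super().__init__()
        pad = dilation  # keeps spatial size for a 3x3 kernel
        self.feature = nn.Conv2d(in_ch, out_ch, 3, padding=pad, dilation=dilation)
        self.gate = nn.Conv2d(in_ch, out_ch, 3, padding=pad, dilation=dilation)
        self.act = nn.ELU()

    def forward(self, x):
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))

layer = DilatedGatedConv2d(32, 32, dilation=4)
out = layer(torch.randn(1, 32, 64, 64))  # same 64x64 spatial size
```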
Detecting COVID-19 in computed tomography (CT) or radiography images has been proposed as a supplement to the definitive RT-PCR test. We present a deep learning ensemble for detecting COVID-19 infection, combining slice-based (2D) and volume-based (3D) approaches. The 2D system detects the infection on each CT slice independently and then combines the slice-level predictions into a patient-level decision via different methods (averaging and long short-term memory networks). The 3D system takes the whole CT volume to arrive at the patient-level decision in one step. A new high-resolution chest CT scan dataset, called the IST-C dataset, is also collected in this work. The proposed ensemble, called IST-CovNet, obtains 90.80% accuracy and a 0.95 AUC score overall on the IST-C dataset in detecting COVID-19 among normal controls and other types of lung pathologies, and 93.69% accuracy and a 0.99 AUC score on the publicly available MosMed dataset, which consists of COVID-19 scans and normal controls only. The system is deployed at Istanbul University Cerrahpasa School of Medicine.
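The two patient-level aggregation strategies of the 2D system can be sketched as follows: simple averaging of per-slice probabilities, and an LSTM that consumes per-slice feature vectors in scan order and emits one patient-level logit. The feature dimension and hidden size are illustrative assumptions, not IST-CovNet's actual configuration.

```python
# Two ways to aggregate per-slice predictions into a patient-level
# decision (illustrative sketch).
import torch
import torch.nn as nn

def average_aggregate(slice_probs):
    """slice_probs: (num_slices,) per-slice COVID probabilities."""
    return slice_probs.mean()

class LSTMAggregator(nn.Module):
    """Consumes per-slice feature vectors in scan order and emits one
    patient-level logit from the final hidden state."""
    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, slice_feats):           # (batch, num_slices, feat_dim)
        _, (h_n, _) = self.lstm(slice_feats)
        return self.head(h_n[-1])             # (batch, 1) patient-level logit

agg = LSTMAggregator()
logit = agg(torch.randn(1, 40, 128))  # e.g. a 40-slice CT volume
```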
The construction of three-dimensional multi-modal tissue maps provides an opportunity to spur interdisciplinary innovation across temporal and spatial scales through information integration. While the preponderance of effort has been allocated to the cellular level, exploring changes in cell interactions and organization, contextualizing findings within organs and systems is essential for visualizing and interpreting higher-resolution linkages across scales. Kidney morphometry and appearance vary substantially across body size, sex, and imaging protocol in abdominal computed tomography (CT). A volumetric atlas framework is needed to integrate and visualize this variability across scales, yet no atlas framework exists for abdominal and retroperitoneal organs in multi-contrast CT. Hence, we propose a high-resolution CT retroperitoneal atlas specifically optimized for the kidney across non-contrast CT and early-arterial, late-arterial, venous, and delayed contrast-enhanced CT. Briefly, we introduce a deep learning-based volume-of-interest extraction method and an automated two-stage hierarchical registration pipeline to register abdominal volumes to a high-resolution CT atlas template. To generate and evaluate the atlas, multi-contrast CT scans of 500 subjects (without reported history of renal disease; age 15-50 years; 250 male, 250 female) were processed. We demonstrate stable generalizability of the atlas template in integrating normal kidney variation from small to large, across contrast phases, and across populations with highly variable demographics. Linking the atlas with demographics provides a better understanding of the variation of kidney anatomy across populations.
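A two-stage hierarchical registration of a subject volume to an atlas template might look like the following SimpleITK sketch: an affine stage driven by mutual information (robust across contrast phases) followed by a deformable B-spline refinement. The use of SimpleITK, the parameter values, and the B-spline grid size are all assumptions for illustration; the authors' pipeline details are not reproduced here.

```python
# Two-stage (affine -> B-spline) registration to an atlas template
# (illustrative sketch; parameters are assumptions).
import SimpleITK as sitk

def register_to_atlas(atlas_path, subject_path):
    fixed = sitk.ReadImage(atlas_path, sitk.sitkFloat32)
    moving = sitk.ReadImage(subject_path, sitk.sitkFloat32)

    # Stage 1: affine alignment with mutual information.
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    reg.SetOptimizerScalesFromPhysicalShift()
    reg.SetInterpolator(sitk.sitkLinear)
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.AffineTransform(3),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    affine = reg.Execute(fixed, moving)
    moving_affine = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)

    # Stage 2: deformable refinement with a coarse B-spline grid.
    bspline = sitk.BSplineTransformInitializer(fixed, [8, 8, 8])
    reg2 = sitk.ImageRegistrationMethod()
    reg2.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    reg2.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg2.SetInterpolator(sitk.sitkLinear)
    reg2.SetInitialTransform(bspline, inPlace=False)
    deform = reg2.Execute(fixed, moving_affine)
    return sitk.Resample(moving_affine, fixed, deform, sitk.sitkLinear, 0.0)
```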
Accurate mandible segmentation in CT scans based on deep learning methods has recently attracted much attention. However, two major challenges remain: metal artifacts around mandibles and large variations in shape and size among individuals. To address these two challenges, we propose a recurrent segmentation convolutional neural network (RSegCNN) that embeds a segmentation convolutional neural network (SegCNN) into a recurrent neural network (RNN) for robust and accurate segmentation of the mandible. This design takes into account the similarity and continuity of mandible shapes captured in adjacent image slices in CT scans. The RSegCNN infers mandible information through the recurrent structure with the embedded encoder-decoder segmentation (SegCNN) components: the recurrent structure guides the system to exploit relevant and important information from adjacent slices, while the SegCNN component focuses on the mandible shape within a single CT slice. We conducted extensive experiments to evaluate the proposed RSegCNN on two head-and-neck CT datasets. The experimental results show that the RSegCNN significantly outperforms state-of-the-art models for accurate mandible segmentation.
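The recurrent slice-by-slice idea can be sketched as follows: a 2D segmentation network receives the current CT slice concatenated with the previous slice's predicted mask, so shape continuity across adjacent slices guides each prediction. The tiny SegCNN placeholder below stands in for any encoder-decoder such as a U-Net; it is not the authors' architecture, and the recurrence scheme shown is an illustrative assumption.

```python
# Recurrent slice-wise segmentation: previous mask conditions the next
# slice's prediction (illustrative sketch).
import torch
import torch.nn as nn

class SegCNN(nn.Module):
    """Tiny placeholder encoder-decoder; input = slice + previous mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

class RSegCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.seg = SegCNN()

    def forward(self, volume):                # (batch, num_slices, H, W)
        b, s, h, w = volume.shape
        prev_mask = torch.zeros(b, 1, h, w, device=volume.device)
        masks = []
        for i in range(s):                    # recur over adjacent slices
            inp = torch.cat([volume[:, i:i + 1], prev_mask], dim=1)
            logits = self.seg(inp)
            prev_mask = torch.sigmoid(logits) # feeds the next slice
            masks.append(logits)
        return torch.stack(masks, dim=1)      # (batch, num_slices, 1, H, W)

model = RSegCNN()
out = model(torch.randn(1, 8, 64, 64))
```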