
Visual enhancement of Cone-beam CT by use of CycleGAN

Published by Shizuo Kaji
Publication date: 2019
Paper language: English





Cone-beam computed tomography (CBCT) offers advantages over conventional fan-beam CT in that it requires a shorter time and less exposure to obtain images. CBCT has found a wide variety of applications in patient positioning for image-guided radiation therapy, extracting radiomic information for designing patient-specific treatment, and computing fractional dose distributions for adaptive radiation therapy. However, CBCT images suffer from low soft-tissue contrast, noise, and artifacts compared to conventional fan-beam CT images. Therefore, it is essential to improve the image quality of CBCT. In this paper, we propose a synthetic approach to translate CBCT images with deep neural networks. Our method requires only unpaired and unaligned CBCT images and planning fan-beam CT (PlanCT) images for training. Once trained, 3D reconstructed CBCT images can be directly translated to high-quality PlanCT-like images. We demonstrate the effectiveness of our method with images obtained from 24 prostate patients, and we provide a statistical and visual comparison. The image quality of the translated images shows substantial improvement in voxel values, spatial uniformity, and artifact suppression compared to those of the original CBCT. The anatomical structures of the original CBCT images were also well preserved in the translated images. Our method enables more accurate adaptive radiation therapy, and opens up new applications for CBCT that hinge on high-quality images.
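
As a rough illustration of the unpaired training described above, the sketch below implements one CycleGAN generator update on 2D slices. The toy networks, the LSGAN loss form, the cycle weight of 10, and the random cbct/planct batches are assumptions for illustration, not the authors' implementation; the discriminator update is omitted for brevity.

    import torch
    import torch.nn as nn

    def conv_net(out_act):
        # Toy 3-layer CNN standing in for the real generator/discriminator.
        return nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), out_act)

    G = conv_net(nn.Tanh())            # CBCT -> PlanCT-like
    F = conv_net(nn.Tanh())            # PlanCT -> CBCT-like
    D_plan = conv_net(nn.Identity())   # critic for the PlanCT domain
    D_cbct = conv_net(nn.Identity())   # critic for the CBCT domain

    mse, l1 = nn.MSELoss(), nn.L1Loss()
    opt_g = torch.optim.Adam(list(G.parameters()) + list(F.parameters()), lr=2e-4)

    cbct = torch.rand(4, 1, 64, 64) * 2 - 1    # unpaired, unaligned batches
    planct = torch.rand(4, 1, 64, 64) * 2 - 1

    fake_plan, fake_cbct = G(cbct), F(planct)
    # Adversarial terms (LSGAN form): translations should fool the critics.
    loss_adv = mse(D_plan(fake_plan), torch.ones_like(D_plan(fake_plan))) + \
               mse(D_cbct(fake_cbct), torch.ones_like(D_cbct(fake_cbct)))
    # Cycle consistency: a round trip must reproduce the input; this is what
    # makes training possible without paired or aligned scans.
    loss_cyc = l1(F(fake_plan), cbct) + l1(G(fake_cbct), planct)

    opt_g.zero_grad()
    (loss_adv + 10.0 * loss_cyc).backward()
    opt_g.step()

Once trained, only G is kept: reconstructed CBCT volumes are pushed through it slice by slice (or in 3D) to obtain PlanCT-like images.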




Read also

Xin Zhen, Xuejun Gu, Hao Yan (2012)
Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration of the demons registration. Specifically, the intensity correction of a voxel in the CBCT is achieved by matching the first and second moments of the voxel intensities inside a patch around the voxel with those on the CT image. Such a strategy is expected to remove artifacts in the CBCT image while ensuring intensity consistency between the two modalities. DISC is implemented on graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and data from six clinical head-and-neck cancer patients. It is found that DISC is robust against CBCT artifacts and intensity inconsistency, and significantly improves the registration accuracy compared with the original demons algorithm.
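
To make the moment-matching step concrete, here is a minimal NumPy sketch of the per-voxel intensity correction described above, using local patch means and variances. The patch size, the epsilon guard, and the synthetic volumes are assumptions; in DISC this correction runs inside the demons loop against the currently deformed CT rather than as a standalone pass.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def moment_match(cbct, ct, patch=5, eps=1e-6):
        """Match the local mean/variance of `cbct` to those of `ct`."""
        mu_c, mu_t = uniform_filter(cbct, patch), uniform_filter(ct, patch)
        var_c = uniform_filter(cbct ** 2, patch) - mu_c ** 2   # 2nd moments
        var_t = uniform_filter(ct ** 2, patch) - mu_t ** 2
        gain = np.sqrt(np.maximum(var_t, 0.0) / (np.maximum(var_c, 0.0) + eps))
        return (cbct - mu_c) * gain + mu_t

    rng = np.random.default_rng(0)
    cbct = rng.random((32, 32, 32), dtype=np.float32)
    ct = 1.2 * cbct + 0.1                      # stand-in "deformed CT"
    corrected = moment_match(cbct, ct)
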
Hao Yan, Xiaoyu Wang, Wotao Yin (2012)
The patient respiratory signal associated with cone-beam CT (CBCT) projections is important for lung cancer radiotherapy. In contrast to monitoring an external surrogate of respiration, such a signal can be extracted directly from the CBCT projections. In this paper, we propose a novel local principal component analysis (LPCA) method to extract the respiratory signal by distinguishing the respiration-motion-induced content change from the gantry-rotation-induced content change in the CBCT projections. The LPCA method is evaluated by comparison with three state-of-the-art projection-based methods, namely, the Amsterdam Shroud (AS) method, the intensity analysis (IA) method, and the Fourier-transform-based phase analysis (FT-p) method. Clinical CBCT projection data of eight patients, acquired under various clinical scenarios, were used to investigate the performance of each method. We found that the proposed LPCA method demonstrated the best overall performance for the cases tested and is thus a promising technique for extracting the respiratory signal. We also identified the applicability of each existing method.
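
A heavily simplified sketch of the window-local PCA idea follows: within a short run of consecutive projections the gantry-induced content change is nearly constant, so the leading principal component of each window tends to track respiration. The window length, the overlap-based sign alignment, and the synthetic projections are illustrative assumptions, not the LPCA method as published.

    import numpy as np

    def local_pca_signal(projections, window=20):
        """projections: (n_views, H, W) -> respiratory surrogate (n_views,)."""
        n = projections.shape[0]
        X = projections.reshape(n, -1)
        signal, counts = np.zeros(n), np.zeros(n)
        for s in range(0, n - window + 1, window // 2):   # 50% overlap
            Wm = X[s:s + window] - X[s:s + window].mean(axis=0)
            _, _, vt = np.linalg.svd(Wm, full_matrices=False)
            score = Wm @ vt[0]                 # leading principal component
            seen = counts[s:s + window] > 0
            # Flip the sign so overlapping segments stitch consistently.
            if seen.any() and score[seen] @ signal[s:s + window][seen] < 0:
                score = -score
            signal[s:s + window] += score
            counts[s:s + window] += 1
        return signal / np.maximum(counts, 1)

    t = np.arange(200)
    template = np.random.rand(16, 16)
    views = (0.5 * (t / 200) + np.sin(2 * np.pi * t / 25))[:, None, None] * template
    trace = local_pca_signal(views)            # oscillates with respiration
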
Commercial iterative reconstruction techniques on modern CT scanners target radiation dose reduction, but there are lingering concerns over their impact on image appearance and low-contrast detectability. Recently, machine learning, especially deep learning, has been actively investigated for CT. Here we design a novel neural network architecture for low-dose CT (LDCT) and compare it with the commercial iterative reconstruction methods used for standard-of-care CT. While popular neural networks are trained for end-to-end mapping, driven by big data, our network is intended for end-to-process mapping, so that intermediate image targets are obtained along with the associated search gradients, along which the final image targets are gradually reached. This learned dynamic process allows radiologists to be included in the training loop to optimize the LDCT denoising workflow in a task-specific fashion, with the denoising depth as a key parameter. Our progressive denoising network was trained on the Mayo LDCT Challenge dataset and tested on chest and abdominal images scanned on CT scanners made by three leading vendors. The best deep-learning-based reconstructions were systematically compared to the best iterative reconstructions in a double-blinded reader study. We find that our deep learning approach performs comparably or favorably in terms of noise suppression and structural fidelity, and runs orders of magnitude faster than the commercial iterative CT reconstruction algorithms.
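
The sketch below illustrates the end-to-process idea in its simplest form: a single learned correction is applied repeatedly, so the number of passes (the denoising depth) becomes a tunable parameter a reader could select. The toy residual network and the depths shown are assumptions, and training is omitted.

    import torch
    import torch.nn as nn

    step_net = nn.Sequential(                  # predicts a small correction
        nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 1, 3, padding=1))

    def denoise(x, depth):
        """Apply the learned step `depth` times; depth is reader-tunable."""
        for _ in range(depth):
            x = x + step_net(x)                # move along the learned path
        return x

    ldct = torch.rand(1, 1, 64, 64)            # stand-in low-dose CT slice
    light = denoise(ldct, depth=2)             # shallower: more texture kept
    heavy = denoise(ldct, depth=8)             # deeper: smoother result
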
Thanks to large-scale labeled training data, deep neural networks (DNNs) have achieved remarkable success in many vision and multimedia tasks. However, because of domain shift, the knowledge learned by well-trained DNNs does not generalize well to new domains or datasets with few labels. Unsupervised domain adaptation (UDA) studies the problem of transferring models trained on one labeled source domain to another unlabeled target domain. In this paper, we focus on UDA for visual emotion analysis, covering both emotion distribution learning and dominant emotion classification. Specifically, we design a novel end-to-end cycle-consistent adversarial model, termed CycleEmotionGAN++. First, we generate an adapted domain to align the source and target domains at the pixel level by improving CycleGAN with a multi-scale structured cycle-consistency loss. During image translation, we propose a dynamic emotional semantic consistency loss to preserve the emotion labels of the source images. Second, we train a transferable task classifier on the adapted domain with feature-level alignment between the adapted and target domains. We conduct extensive UDA experiments on the Flickr-LDL and Twitter-LDL datasets for distribution learning and on the ArtPhoto and FI datasets for emotion classification. The results demonstrate significant improvements of the proposed CycleEmotionGAN++ over state-of-the-art UDA approaches.
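
The two consistency terms can be sketched directly. Below, a multi-scale cycle-consistency loss compares input and reconstruction at several resolutions, and an emotional semantic consistency loss penalizes divergence between the emotion distributions predicted for a source image and its adapted version. The pooling scales, the KL form, and the random tensors are assumptions rather than the published loss definitions.

    import torch
    import torch.nn.functional as F

    def multiscale_cycle_loss(x, x_rec, scales=(1, 2, 4)):
        # Compare input and cycle reconstruction at several resolutions.
        return sum(F.l1_loss(F.avg_pool2d(x, s), F.avg_pool2d(x_rec, s))
                   for s in scales) / len(scales)

    def emotion_consistency_loss(logits_src, logits_adapted):
        # KL between emotion distributions predicted before/after adaptation.
        return F.kl_div(F.log_softmax(logits_adapted, dim=1),
                        F.softmax(logits_src, dim=1), reduction="batchmean")

    x = torch.rand(2, 3, 64, 64)                       # source images
    x_rec = x + 0.01 * torch.randn_like(x)             # stand-in round trip
    print(multiscale_cycle_loss(x, x_rec).item())
    print(emotion_consistency_loss(torch.randn(2, 8), torch.randn(2, 8)).item())
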
Despite the widespread availability of in-treatment-room cone-beam computed tomography (CBCT) imaging, due to the lack of reliable segmentation methods CBCT is used only for gross setup corrections in lung radiotherapy. Accurate and reliable auto-segmentation tools could enable volumetric response assessment and geometry-guided adaptive radiation therapy. We therefore developed a new deep learning CBCT lung tumor segmentation method. Methods: The key idea of our approach, called cross-modality educed distillation (CMEDL), is to use magnetic resonance imaging (MRI) to guide the training of a CBCT segmentation network so that it extracts more informative features. We accomplish this by training an end-to-end network composed of unpaired domain adaptation (UDA) and cross-domain segmentation distillation networks (SDN) using unpaired CBCT and MRI datasets. Feature distillation regularizes the student network to extract CBCT features that match the statistical distribution of MRI features extracted by the teacher network, yielding better differentiation of tumor from background. We also compared against an alternative framework that used UDA with an MR segmentation network, whereby segmentation was done on the synthesized pseudo-MRI representation. All networks were trained with 216 weekly CBCTs and 82 T2-weighted turbo spin echo MRIs acquired from different patient cohorts. Validation was done on 20 weekly CBCTs from patients not used in training. Independent testing was done on 38 weekly CBCTs from patients not used in training or validation. Segmentation accuracy was measured using the surface Dice similarity coefficient (SDSC) and the 95th-percentile Hausdorff distance (HD95).
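
As a rough sketch of the feature-distillation idea, the snippet below regularizes a student (CBCT) encoder's features toward the statistics of a teacher (MRI) encoder's features by matching channel means and Gram matrices. This simple moment matching is only a stand-in for the distribution matching described above; the shapes and feature maps are assumed.

    import torch

    def gram(f):
        # Channel-by-channel Gram matrix, averaged over the batch.
        b, c, h, w = f.shape
        v = f.reshape(b, c, h * w)
        return (v @ v.transpose(1, 2)).mean(0) / (h * w)

    def feature_stats_loss(f_student, f_teacher):
        """f_*: (B, C, H, W) features from unpaired CBCT / MRI batches."""
        mu_s = f_student.mean(dim=(0, 2, 3))
        mu_t = f_teacher.mean(dim=(0, 2, 3))
        return ((mu_s - mu_t) ** 2).mean() + \
               ((gram(f_student) - gram(f_teacher)) ** 2).mean()

    loss = feature_stats_loss(torch.randn(4, 16, 32, 32),
                              torch.randn(4, 16, 32, 32))
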