
Deep learning based CT-to-CBCT deformable image registration for autosegmentation in head and neck adaptive radiation therapy

Added by Xiao Liang
Publication date: 2021
Field: Physics
Language: English





The purpose of this study is to develop a deep learning based method that can automatically generate segmentations on cone-beam CT (CBCT) for head and neck online adaptive radiation therapy (ART), where expert-drawn contours on the planning CT (pCT) serve as prior knowledge. Because CBCT suffers from severe artifacts and truncation, we propose to obtain updated contours on CBCT through a learning based deformable image registration method followed by contour propagation. Our method takes CBCT and pCT as inputs and simultaneously outputs a deformation vector field and a synthetic CT (sCT) by jointly training a CycleGAN model and a 5-cascaded Voxelmorph model. The CycleGAN serves to generate the sCT from CBCT, while the 5-cascaded Voxelmorph serves to warp the pCT to the sCT's anatomy. The segmentation results were compared to Elastix, Voxelmorph, and 5-cascaded Voxelmorph on 18 structures including left brachial plexus, right brachial plexus, brainstem, oral cavity, middle pharyngeal constrictor, superior pharyngeal constrictor, inferior pharyngeal constrictor, esophagus, nodal gross tumor volume, larynx, mandible, left masseter, right masseter, left parotid gland, right parotid gland, left submandibular gland, right submandibular gland, and spinal cord. Results show that our proposed method achieves an average Dice similarity coefficient of 0.83 and a 95% Hausdorff distance of 2.01 mm. Compared with the other methods, our method shows better accuracy than Voxelmorph and 5-cascaded Voxelmorph, and comparable accuracy to Elastix with much higher efficiency. The proposed method can rapidly and simultaneously generate sCT with correct CT numbers and propagate contours from pCT to CBCT for online ART re-planning.
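The contour-propagation step described above warps structures drawn on the planning CT through the predicted deformation vector field. A minimal sketch of that step, assuming a DVF given as per-voxel displacements (in voxels) that map each output coordinate back to a source coordinate — the function name, array shapes, and toy displacement below are illustrative, not the authors' code:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_contour(mask, dvf):
    """Warp a binary contour mask with a deformation vector field.

    mask: (D, H, W) binary array of a structure on the planning CT.
    dvf:  (3, D, H, W) displacements in voxels, output -> source convention.
    """
    grid = np.indices(mask.shape).astype(np.float32)  # identity sampling grid
    coords = grid + dvf                               # where to sample from
    warped = map_coordinates(mask.astype(np.float32), coords, order=1)
    return warped > 0.5                               # re-binarize after interpolation

# Toy example: a DVF that pulls every voxel's value from one slice deeper in z,
# shifting a small cube one voxel toward z = 0.
mask = np.zeros((8, 8, 8))
mask[2:5, 2:5, 2:5] = 1
dvf = np.zeros((3, 8, 8, 8))
dvf[0] = 1.0
out = propagate_contour(mask, dvf)
```

In practice the same DVF is applied to every structure at once, which is what makes registration-based propagation attractive for online ART: one registration yields all 18 contours.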


Adaptive radiotherapy (ART), especially online ART, effectively accounts for positioning errors and anatomical changes. One key component of online ART is accurately and efficiently delineating organs at risk (OARs) and targets on online images, such as CBCT, to meet the online demands of plan evaluation and adaptation. Deep learning (DL)-based automatic segmentation has achieved great success in segmenting planning CT, but its application to CBCT has yielded inferior results because of the low image quality and the limited contour labels available for training. To overcome these obstacles to online CBCT segmentation, we propose a registration-guided DL (RgDL) segmentation framework that integrates image registration algorithms and DL segmentation models. The registration algorithm generates initial contours, which are used as guidance by the DL model to obtain accurate final segmentations. We developed two implementations of the proposed framework--Rig-RgDL (Rig for rigid body) and Def-RgDL (Def for deformable)--with rigid body (RB) registration or deformable image registration (DIR) as the registration algorithm, respectively, and U-Net as the DL model architecture. The two implementations of the RgDL framework were trained and evaluated on seven OARs in an institutional clinical head and neck (HN) dataset. Compared to the baseline approaches using registration or DL alone, RgDL achieved more accurate segmentation, as measured by higher mean Dice similarity coefficients (DSC) and other distance-based metrics. Rig-RgDL achieved an average DSC of 84.5% on the seven OARs, higher than RB or DL alone by 4.5% and 4.7%. The DSC of Def-RgDL was 86.5%, higher than DIR or DL alone by 2.4% and 6.7%. In RgDL, the inference time taken by the DL model to generate the final segmentations of the seven OARs is less than one second. The resulting segmentation accuracy and efficiency show the promise of applying the RgDL framework to online ART.
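The Dice similarity coefficient used to score segmentations throughout these studies has a compact definition: twice the overlap of the two masks divided by the sum of their sizes. A minimal sketch (the masks and function name below are illustrative, not taken from any of the papers):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

# Two overlapping 6x6 squares on a 10x10 grid.
pred = np.zeros((10, 10), bool); pred[2:8, 2:8] = True    # 36 pixels
ref  = np.zeros((10, 10), bool); ref[4:10, 4:10] = True   # 36 pixels
score = dice(pred, ref)   # overlap is the 4x4 region [4:8, 4:8] = 16 pixels
```

Here the score is 2·16 / (36+36) ≈ 0.444; a DSC of 84.5% therefore indicates much tighter agreement with the reference contours.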
Purpose: Organ-at-risk (OAR) delineation is a key step for cone-beam CT (CBCT) based adaptive radiotherapy planning that can be a time-consuming, labor-intensive, and subject-to-variability process. We aim to develop a fully automated approach aided by synthetic MRI for rapid and accurate CBCT multi-organ contouring in head-and-neck (HN) cancer patients. MRI has superb soft-tissue contrast, while CBCT offers bony-structure contrast. Using the complementary information provided by MRI and CBCT is expected to enable accurate multi-organ segmentation in HN cancer patients. In our proposed method, MR images are first synthesized from CBCT using a pre-trained cycle-consistent generative adversarial network. The features of CBCT and synthetic MRI are then extracted using dual pyramid networks for final delineation of organs. CBCT images and their corresponding manual contours were used as pairs to train and test the proposed model. Quantitative metrics including the Dice similarity coefficient (DSC) were used to evaluate the proposed method. The proposed method was evaluated on a cohort of 65 HN cancer patients. CBCT images were collected from those patients who received proton therapy. Overall, DSC values of 0.87, 0.79/0.79, 0.89/0.89, 0.90, 0.75/0.77, 0.86, 0.66, 0.78/0.77, 0.96, 0.89/0.89, 0.832, and 0.84 were achieved for OARs commonly used in treatment planning, including brain stem, left/right cochlea, left/right eye, larynx, left/right lens, mandible, optic chiasm, left/right optic nerve, oral cavity, left/right parotid, pharynx, and spinal cord, respectively. In this study, we developed a synthetic MRI-aided HN CBCT auto-segmentation method based on deep learning. It provides a rapid and accurate OAR auto-delineation approach, which can be used for adaptive radiation therapy.
188 - Xuejun Gu, Bin Dong, Jing Wang (2013)
In adaptive radiotherapy, deformable image registration is often conducted between the planning CT and treatment CT (or cone beam CT) to generate a deformation vector field (DVF) for dose accumulation and contour propagation. The auto-propagated contours on the treatment CT may contain relatively large errors, especially in low contrast regions, so clinician inspection and editing of the propagated contours are frequently needed. The edited contours are able to meet the clinical requirement for adaptive therapy; however, the DVF is still inaccurate and inconsistent with the edited contours. The purpose of this work is to develop a contour-guided deformable image registration (CG-DIR) algorithm to improve the accuracy and consistency of the DVF for adaptive radiotherapy. Incorporation of the edited contours into the registration algorithm is realized by regularizing the objective function of the original demons algorithm with a term of intensity matching between the delineated structure set pairs. The CG-DIR algorithm is implemented on computer graphics processing units (GPUs) by following the original GPU-based demons algorithm computation framework [Gu et al, Phys Med Biol. 55(1): 207-219, 2010]. The performance of CG-DIR is evaluated on data from five clinical head-and-neck cancer patients and one pelvic cancer patient. It is found that, compared with the original demons, CG-DIR improves the accuracy and consistency of the DVF while retaining similar high computational efficiency.
148 - Xin Zhen, Xuejun Gu, Hao Yan (2012)
Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image while ensuring intensity consistency between the two modalities. DISC is implemented on computer graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and data from six clinical head-and-neck cancer patients. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons.
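The moment-matching idea at the heart of DISC — shifting and scaling each CBCT voxel so that the mean and standard deviation of its local patch match those of the corresponding CT patch — can be approximated with local mean/variance filters. This is a rough sketch under stated assumptions (the patch size, the epsilon, and the one-shot closed-form correction are choices of this sketch; the paper interleaves the correction with the demons iterations):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def moment_match(cbct, ct, size=5, eps=1e-6):
    """Correct CBCT intensities toward CT by matching the first two moments
    (local patch mean and standard deviation) around every voxel."""
    m_cb = uniform_filter(cbct, size)                      # patch means
    m_ct = uniform_filter(ct, size)
    v_cb = uniform_filter(cbct ** 2, size) - m_cb ** 2     # patch variances
    v_ct = uniform_filter(ct ** 2, size) - m_ct ** 2
    s_cb = np.sqrt(np.clip(v_cb, 0.0, None)) + eps         # guard flat patches
    s_ct = np.sqrt(np.clip(v_ct, 0.0, None))
    # Standardize against the CBCT patch statistics, restyle with the CT's.
    return (cbct - m_cb) / s_cb * s_ct + m_ct

# Simulated example: CBCT as a globally scaled and shifted copy of the CT.
rng = np.random.default_rng(0)
ct = rng.uniform(0.0, 1000.0, size=(32, 32))
cbct = 0.5 * ct + 100.0
corrected = moment_match(cbct, ct)
```

For a locally affine intensity distortion like this one, the patch-wise moment match recovers the CT values almost exactly; real CBCT artifacts are spatially varying, which is why the correction is applied per patch rather than globally.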
Purpose: Dual-energy CT (DECT) has been used to derive relative stopping power (RSP) maps by exploiting the energy dependence of photon interactions. The DECT-derived RSP maps can be compromised by image noise and artifacts when physics-based mapping techniques are used, which would affect subsequent clinical applications. This work presents a noise-robust learning-based method to predict RSP maps from DECT for proton radiation therapy. Methods: The proposed method uses a residual attention cycle-consistent generative adversarial network (CycleGAN). The CycleGAN was used to keep the DECT-to-RSP mapping close to a one-to-one mapping by introducing an inverse RSP-to-DECT mapping. We retrospectively investigated 20 head-and-neck cancer patients with DECT scans acquired for proton radiation therapy simulation. Ground truth RSP values were assigned by calculation based on chemical compositions, served as learning targets for the DECT datasets during training, and were evaluated against results from the proposed method using a leave-one-out cross-validation strategy. Results: The predicted RSP maps showed an average normalized mean square error (NMSE) of 2.83% across the whole body volume and an average mean error (ME) of less than 3% in all volumes of interest (VOIs). With additional simulated noise added to the DECT datasets, the proposed method still maintained comparable performance, while the physics-based stoichiometric method suffered degraded accuracy as the noise level increased. The average differences in DVH metrics for clinical target volumes (CTVs) were less than 0.2 Gy for D95% and Dmax, with no statistical significance. Conclusion: These results strongly indicate the high accuracy of RSP maps predicted by our machine-learning-based method and show its potential feasibility for proton treatment planning and dose calculation.
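The NMSE and ME figures quoted above are simple voxel-wise error statistics. One common way to define them is sketched below; the paper does not spell out its exact normalization, so treat these formulas as an assumption rather than the authors' definition:

```python
import numpy as np

def nmse(pred, ref):
    """Normalized mean square error in percent: ||pred - ref||^2 / ||ref||^2."""
    return 100.0 * np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def mean_error(pred, ref):
    """Mean (signed) error in percent, relative to the reference mean."""
    return 100.0 * np.mean(pred - ref) / np.mean(ref)

# A predicted map that uniformly overestimates the reference by 1%.
ref = np.array([1.0, 2.0, 3.0])
pred = 1.01 * ref
e_nmse = nmse(pred, ref)        # (0.01)^2 * 100 = 0.01 %
e_me = mean_error(pred, ref)    # 1.0 %
```

Note that a signed mean error can hide compensating over- and underestimates, which is why the paper reports NMSE alongside ME and DVH-based dosimetric checks.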