
CT to Cone-beam CT Deformable Registration With Simultaneous Intensity Correction

Posted by: Xun Jia
Publication date: 2012
Research field: Physics
Paper language: English





Computed tomography (CT) to cone-beam computed tomography (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called Deformation with Intensity Simultaneously Corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensure intensity consistency between the two modalities. DISC is implemented on graphics processing units (GPUs) in the compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient datasets. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons.
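The patch-based moment matching described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' GPU implementation: the function name, the patch half-width, and the stabilizer `eps` are assumptions, and the sketch corrects a single voxel rather than running inside a demons iteration.

```python
import numpy as np

def correct_patch_intensity(cbct, ct, center, half=3, eps=1e-6):
    """Match the first and second moments of a CBCT patch to the CT patch.

    The center voxel is shifted and scaled so that the patch around it has
    the same mean (first moment) and standard deviation (second moment)
    as the corresponding patch in the CT image.
    """
    z, y, x = center
    sl = np.s_[z - half:z + half + 1, y - half:y + half + 1, x - half:x + half + 1]
    p_cbct, p_ct = cbct[sl], ct[sl]
    mu_c, sd_c = p_cbct.mean(), p_cbct.std()
    mu_t, sd_t = p_ct.mean(), p_ct.std()
    # standardize against the CBCT patch, re-express in CT-patch moments
    return (cbct[z, y, x] - mu_c) * (sd_t / (sd_c + eps)) + mu_t
```

With a purely linear intensity distortion between the modalities, this correction recovers the CT value exactly, which is why it removes the modality mismatch that defeats plain demons.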




Read also

Adaptive radiotherapy (ART), especially online ART, effectively accounts for positioning errors and anatomical changes. One key component of online ART is accurately and efficiently delineating organs at risk (OARs) and targets on online images, such as CBCT, to meet the online demands of plan evaluation and adaptation. Deep learning (DL)-based automatic segmentation has gained great success in segmenting planning CT, but its applications to CBCT have yielded inferior results due to the low image quality and limited available contour labels for training. To overcome these obstacles to online CBCT segmentation, we propose a registration-guided DL (RgDL) segmentation framework that integrates image registration algorithms and DL segmentation models. The registration algorithm generates initial contours, which are used as guidance by the DL model to obtain accurate final segmentations. We implemented the proposed framework in two ways--Rig-RgDL (Rig for rigid body) and Def-RgDL (Def for deformable)--with rigid-body (RB) registration or deformable image registration (DIR) as the registration algorithm, respectively, and U-Net as the DL model architecture. The two implementations of the RgDL framework were trained and evaluated on seven OARs in an institutional clinical head-and-neck (HN) dataset. Compared to the baseline approaches using the registration or the DL alone, RgDL achieved more accurate segmentation, as measured by higher mean Dice similarity coefficients (DSC) and other distance-based metrics. Rig-RgDL achieved a DSC of 84.5% on the seven OARs on average, higher than RB or DL alone by 4.5% and 4.7%. The DSC of Def-RgDL is 86.5%, higher than DIR or DL alone by 2.4% and 6.7%. In RgDL, the DL model takes less than one second to generate the final segmentations of all seven OARs. The resulting segmentation accuracy and efficiency show the promise of applying the RgDL framework to online ART.
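The guidance mechanism described above can be sketched very simply: the registration-propagated contour is handed to the DL model as an extra input channel alongside the CBCT. The function name and two-channel layout below are assumptions for illustration; the abstract does not specify how the guidance enters the U-Net.

```python
import numpy as np

def rgdl_input(cbct_slice, prior_contour_mask):
    """Stack a CBCT slice and its registration-generated initial contour.

    Returns a (2, H, W) array: channel 0 is the image, channel 1 the
    propagated contour mask that guides the DL segmentation model.
    """
    guidance = prior_contour_mask.astype(cbct_slice.dtype)
    return np.stack([cbct_slice, guidance], axis=0)
```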
Hao Yan, Xiaoyu Wang, Wotao Yin (2012)
Patient respiratory signal associated with the cone beam CT (CBCT) projections is important for lung cancer radiotherapy. In contrast to monitoring an external surrogate of respiration, such a signal can be extracted directly from the CBCT projections. In this paper, we propose a novel local principal component analysis (LPCA) method to extract the respiratory signal by distinguishing the respiration motion-induced content change from the gantry rotation-induced content change in the CBCT projections. The LPCA method is evaluated by comparing with three state-of-the-art projection-based methods, namely, the Amsterdam Shroud (AS) method, the intensity analysis (IA) method, and the Fourier-transform based phase analysis (FT-p) method. The clinical CBCT projection data of eight patients, acquired under various clinical scenarios, were used to investigate the performance of each method. We found that the proposed LPCA method demonstrated the best overall performance for the cases tested and thus is a promising technique for extracting respiratory signal. We also identified the applicability of each existing method.
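A stripped-down PCA version of the idea can be sketched as follows. This is a global-PCA simplification, not the paper's local-PCA algorithm (which separates rotation-induced from breathing-induced content); the function name is an assumption. Each projection is flattened into a feature vector, and the score of the first principal component over time serves as the breathing trace.

```python
import numpy as np

def extract_resp_signal(projections):
    """Extract a 1-D respiratory trace from a (T, H, W) projection stack.

    After removing each pixel's temporal mean, the score of the first
    principal component across time tracks the dominant intensity mode,
    which for a breathing phantom is the respiratory motion.
    """
    t, h, w = projections.shape
    X = projections.reshape(t, h * w).astype(float)
    X -= X.mean(axis=0)                       # per-pixel temporal mean removal
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[0]                          # PC-1 score per projection
```

Note that PCA determines the component only up to sign, so the recovered trace may be flipped relative to the true breathing phase.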
The purpose of this study is to develop a deep learning based method that can automatically generate segmentations on cone-beam CT (CBCT) for head and neck online adaptive radiation therapy (ART), where expert-drawn contours in planning CT (pCT) can serve as prior knowledge. Because CBCT images suffer from severe artifacts and truncation, we propose to utilize a learning based deformable image registration method and contour propagation to obtain updated contours on CBCT. Our method takes CBCT and pCT as inputs and outputs a deformation vector field and a synthetic CT (sCT) at the same time, by jointly training a CycleGAN model and a 5-cascaded Voxelmorph model together. The CycleGAN serves to generate the sCT from CBCT, while the 5-cascaded Voxelmorph serves to warp the pCT to the sCT's anatomy. The segmentation results were compared to Elastix, Voxelmorph, and 5-cascaded Voxelmorph on 18 structures including left brachial plexus, right brachial plexus, brainstem, oral cavity, middle pharyngeal constrictor, superior pharyngeal constrictor, inferior pharyngeal constrictor, esophagus, nodal gross tumor volume, larynx, mandible, left masseter, right masseter, left parotid gland, right parotid gland, left submandibular gland, right submandibular gland, and spinal cord. Results show that our proposed method can achieve an average Dice similarity coefficient of 0.83 and a 95% Hausdorff distance of 2.01 mm. Compared to other methods, our method has shown better accuracy than Voxelmorph and 5-cascaded Voxelmorph, and comparable accuracy to Elastix but much higher efficiency. The proposed method can rapidly and simultaneously generate sCT with correct CT numbers and propagate contours from pCT to CBCT for online ART re-planning.
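The contour-propagation step above amounts to resampling a binary mask through a deformation vector field. A minimal pure-NumPy sketch of backward (pull) warping with nearest-neighbour sampling is shown below; the function name and the `(3, Z, Y, X)` voxel-displacement convention are assumptions, and the learned models producing the DVF are of course not reproduced here.

```python
import numpy as np

def propagate_contour(mask, dvf):
    """Warp a binary contour mask with a deformation vector field.

    mask: (Z, Y, X) integer/bool array.
    dvf:  (3, Z, Y, X) per-voxel displacement in voxel units.
    Each output voxel pulls its value from mask at (position + displacement);
    nearest-neighbour rounding keeps the warped mask binary.
    """
    z, y, x = np.indices(mask.shape)
    src = np.round(np.stack([z, y, x]).astype(float) + dvf).astype(int)
    for axis, size in enumerate(mask.shape):        # clamp to the volume
        np.clip(src[axis], 0, size - 1, out=src[axis])
    return mask[src[0], src[1], src[2]]
```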
Xun Jia, Hao Yan, Laura Cervino (2012)
Simulation of x-ray projection images plays an important role in cone beam CT (CBCT) related research projects. A projection image contains primary signal, scatter signal, and noise. It is computationally demanding to perform accurate and realistic computations for all of these components. In this work, we develop a package on GPU, called gDRR, for the accurate and efficient computation of x-ray projection images in CBCT under clinically realistic conditions. The primary signal is computed by a tri-linear ray-tracing algorithm. A Monte Carlo (MC) simulation is then performed, yielding the primary signal and the scatter signal, both with noise. A denoising process is applied to obtain a smooth scatter signal. The noise component is then obtained by combining the difference between the MC primary and the ray-tracing primary signals, and the difference between the MC simulated scatter and the denoised scatter signals. Finally, a calibration step converts the calculated noise signal into a realistic one by scaling its amplitude. For a typical CBCT projection with a poly-energetic spectrum, the calculation time for the primary signal is 1.2~2.3 sec, while the MC simulations take 28.1~95.3 sec. Computation time for all other steps is negligible. The ray-tracing primary signal matches well with the primary part of the MC simulation result. The MC simulated scatter signal using gDRR is in agreement with EGSnrc results with a relative difference of 3.8%. A noise calibration process is conducted to calibrate gDRR against a real CBCT scanner. The calculated projections are accurate and realistic, such that beam-hardening artifacts and scatter artifacts can be reproduced using the simulated projections. The noise amplitudes in the CBCT images reconstructed from the simulated projections also agree with those in the measured images at corresponding mAs levels.
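The signal-assembly arithmetic in the gDRR pipeline can be sketched in a few lines. This is only the combination step described in the abstract, with an assumed function name and an assumed scalar calibration factor `k`; the ray tracing, MC simulation, and denoising are treated as given inputs.

```python
import numpy as np

def realistic_projection(rt_primary, mc_primary, mc_scatter, scatter_smooth, k=1.0):
    """Assemble a simulated CBCT projection from its components.

    noise = (MC primary - ray-traced primary)
          + (MC scatter  - denoised scatter)
    The final projection is the analytic primary plus smooth scatter plus
    the noise scaled by a calibration factor k.
    """
    noise = (mc_primary - rt_primary) + (mc_scatter - scatter_smooth)
    return rt_primary + scatter_smooth + k * noise
```

By construction, k = 0 gives the noiseless analytic projection, while k = 1 reproduces the full MC result (primary + scatter).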
In in-utero MRI, motion correction for the fetal body and placenta poses a particular challenge due to the presence of local non-rigid transformations of organs caused by bending and stretching. The existing slice-to-volume registration (SVR) reconstruction methods are widely employed for motion correction of the fetal brain, which undergoes only rigid transformation. However, for reconstruction of the fetal body and placenta, rigid registration cannot resolve the issue of misregistrations due to deformable motion, resulting in degradation of features in the reconstructed volume. We propose Deformable SVR (DSVR), a novel approach for non-rigid motion correction of fetal MRI based on a hierarchical deformable SVR scheme, to allow high resolution reconstruction of the fetal body and placenta. Additionally, a robust scheme for structure-based rejection of outliers minimises the impact of registration errors. The improved performance of DSVR in comparison to SVR and patch-to-volume registration (PVR) methods is quantitatively demonstrated in simulated experiments and 20 fetal MRI datasets from the 28-31 weeks gestational age (GA) range with varying degrees of motion corruption. In addition, we present qualitative evaluation of 100 fetal body cases from the 20-34 weeks GA range.