
A Method of Rapid Quantification of Patient-Specific Organ Dose for CT Using Coupled Deep-Learning based Multi-Organ Segmentation and GPU-accelerated Monte Carlo Dose Computing

Posted by Zhao Peng
Published 2019
Research field: Physics
Paper language: English





Purpose: This paper describes a new method that applies deep-learning algorithms for automatic segmentation of radiosensitive organs from 3D tomographic CT images before computing organ doses with a GPU-based Monte Carlo code. Methods: A deep convolutional neural network (CNN) for organ segmentation is trained to automatically delineate radiosensitive organs from CT. Using a GPU-based Monte Carlo dose engine (ARCHER) to derive the CT dose in a phantom constructed from a subject's CT scan, we are then able to compute the patient-specific CT dose for each of the segmented organs. The developed tool is validated using the Relative Dose Error (RDE) against organ doses calculated by ARCHER with manual segmentations performed by radiologists. The dose computation results are also compared against organ doses from population-average phantoms to demonstrate the improvement achieved by the developed tool. Two datasets were used in this study: the Lung CT Segmentation Challenge 2017 (LCTSC) dataset, which contains 60 thoracic CT scan patients, each with 5 segmented organs, and the Pancreas-CT (PCT) dataset, which contains 43 abdominal CT scan patients, each with 8 segmented organs. Five-fold cross-validation of the new method is performed on both datasets. Results: Compared with the traditional organ dose evaluation method based on population-average phantoms, our proposed method achieved a smaller RDE range for all organs: -4.3%~1.5% vs. -31.5%~33.9% (lung), -7.0%~2.3% vs. -15.2%~125.1% (heart), and -18.8%~40.2% vs. -10.3%~124.1% (esophagus) in the LCTSC dataset, and -5.6%~1.6% vs. -20.3%~57.4% (spleen), -4.5%~4.6% vs. -19.5%~61.0% (pancreas), -2.3%~4.4% vs. -37.8%~75.8% (left kidney), -14.9%~5.4% vs. -39.9%~14.6% (gall bladder), -0.9%~1.6% vs. -30.1%~72.5% (liver), and -23.0%~11.1% vs. -52.5%~-1.3% (stomach) in the PCT dataset.
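The validation metric above is the per-organ Relative Dose Error, RDE = (D_auto − D_manual) / D_manual × 100%, where D_auto is the organ dose obtained with the deep-learning segmentation and D_manual the dose obtained with the radiologists' manual segmentation. A minimal Python sketch of this comparison (function names and array layout are assumptions for illustration, not the authors' code):

```python
import numpy as np

def mean_organ_dose(dose, organ_mask):
    """Mean absorbed dose over the voxels of one segmented organ.

    dose       : 3D array of voxel doses from the Monte Carlo engine
    organ_mask : boolean 3D array marking the organ's voxels
    """
    return dose[organ_mask].mean()

def relative_dose_error(dose, auto_mask, manual_mask):
    """RDE (%) of the auto-segmentation organ dose vs. the manual reference."""
    d_auto = mean_organ_dose(dose, auto_mask)
    d_manual = mean_organ_dose(dose, manual_mask)
    return 100.0 * (d_auto - d_manual) / d_manual
```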


Read also

Cone beam CT (CBCT) has been widely used for patient setup in image guided radiation therapy (IGRT). Radiation dose from CBCT scans has become a clinical concern. The purposes of this study are 1) to commission a GPU-based Monte Carlo (MC) dose calculation package, gCTD, for the Varian On-Board Imaging (OBI) system and test its calculation accuracy, and 2) to quantitatively evaluate CBCT dose from the OBI system in typical IGRT scan protocols. We first conducted dose measurements in a water phantom. X-ray source model parameters used in gCTD are obtained through a commissioning process. gCTD accuracy is demonstrated by comparing calculations with measurements in water and in CTDI phantoms. 25 brain cancer patients are used to study dose in a standard-dose head protocol, and 25 prostate cancer patients are used to study dose in a pelvis protocol and a pelvis spotlight protocol. Mean dose to each organ is calculated. Mean dose to the 2% of voxels that receive the highest dose is also computed to quantify the maximum dose. It is found that the mean dose to an organ varies considerably among patients. Moreover, the dose distribution is highly non-homogeneous inside an organ. The maximum dose is found to be 1~3 times higher than the mean dose depending on the organ, and up to 8 times higher for the entire body due to the very high dose region in bony structures. High computational efficiency has also been observed in our studies: MC dose calculation time is less than 5 min for a typical case.
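Both summary statistics used above are straightforward to compute from a voxelized MC dose map. A sketch assuming a NumPy dose array and a boolean organ mask (illustrative, not the gCTD code itself):

```python
import numpy as np

def organ_dose_stats(dose, organ_mask, top_fraction=0.02):
    """Mean organ dose and mean dose to the hottest `top_fraction` of voxels."""
    voxels = dose[organ_mask]
    mean_dose = voxels.mean()
    # Mean over the top 2% highest-dose voxels, used as a maximum-dose surrogate.
    n_top = max(1, int(round(top_fraction * voxels.size)))
    top_mean = np.sort(voxels)[-n_top:].mean()
    return mean_dose, top_mean
```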
The Monte Carlo (MC) method has been recognized as the most accurate dose calculation method for radiotherapy. However, its extremely long computation time impedes clinical applications. Recently, much effort has been made to realize fast MC dose calculation on GPUs. Nonetheless, most GPU-based MC dose engines were developed in the NVidia CUDA environment. This limits code portability to other platforms, hindering the introduction of GPU-based MC simulations into clinical practice. The objective of this paper is to develop a fast cross-platform MC dose engine, oclMC, using the OpenCL environment for external beam photon and electron radiotherapy in the MeV energy range. Coupled photon-electron MC simulation was implemented with analogue simulation for photon transport and a Class II condensed history scheme for electron transport. To test the accuracy and efficiency of our dose engine oclMC, we compared dose calculation results of oclMC and gDPM, our previously developed GPU-based MC code, for a 15 MeV electron beam and a 6 MV photon beam on a homogeneous water phantom, one slab phantom, and one half-slab phantom. Satisfactory agreement was observed in all cases. The average dose differences within the 10% isodose line of the maximum dose were 0.48-0.53% for the electron beam cases and 0.15-0.17% for the photon beam cases. In terms of efficiency, our dose engine oclMC was 6-17% slower than gDPM when running both codes on the same NVidia TITAN card, due to both the different physics particle transport models and the different computational environments of CUDA and OpenCL. The cross-platform portability was also validated by successfully running our new dose engine on a set of different compute devices, including an Nvidia GPU card, two AMD GPU cards, and an Intel CPU using one or four cores. Computational efficiency among these platforms was compared.
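The accuracy figure quoted above, the average dose difference inside the 10% isodose line of the maximum dose, can be evaluated in a few lines of NumPy. A sketch under assumed inputs (the exact normalization convention in the paper may differ):

```python
import numpy as np

def mean_diff_within_isodose(dose_test, dose_ref, threshold=0.10):
    """Average relative dose difference (%) inside the 10% isodose region.

    Only voxels receiving more than `threshold` * max(dose_ref) are compared,
    and differences are normalized to the maximum reference dose (assumption).
    """
    d_max = dose_ref.max()
    region = dose_ref > threshold * d_max
    return 100.0 * np.abs(dose_test[region] - dose_ref[region]).mean() / d_max
```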
Zhen Tian, Xun Jia, Kehong Yuan (2010)
High radiation dose in CT scans increases the lifetime risk of cancer and has become a major clinical concern. Recently, iterative reconstruction algorithms with Total Variation (TV) regularization have been developed to reconstruct CT images from highly undersampled data acquired at low mAs levels in order to reduce the imaging dose. Nonetheless, TV regularization may lead to over-smoothed images and lost edge information. To solve this problem, in this work we develop an iterative CT reconstruction algorithm with edge-preserving TV regularization to reconstruct CT images from highly undersampled data obtained at low mAs levels. The CT image is reconstructed by minimizing an energy consisting of an edge-preserving TV norm and a data fidelity term posed by the x-ray projections. The edge-preserving TV term preferentially performs smoothing only on the non-edge parts of the image in order to avoid over-smoothing, which is realized by introducing a penalty weight into the original total variation norm. Our iterative algorithm is implemented on GPU to improve its speed. We test our reconstruction algorithm on a digital NCAT phantom, a physical chest phantom, and a Catphan phantom. Reconstruction results from a conventional FBP algorithm and a TV regularization method without the edge-preserving penalty are also presented for comparison. The experimental results illustrate that both the TV-based algorithm and our edge-preserving TV algorithm outperform the conventional FBP algorithm in suppressing streaking artifacts and image noise in the low-dose context. Our edge-preserving algorithm is superior to the TV-based algorithm in that it preserves more information about fine structures and therefore maintains acceptable spatial resolution.
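The key modification is a per-voxel penalty weight on the TV norm so that smoothing is suppressed near edges; schematically, the energy is E(x) = Σᵢ wᵢ |∇xᵢ| + λ‖Ax − b‖², with wᵢ small where the local gradient indicates an edge. A hedged sketch of such a weighted 2D TV term (the specific weighting function and `delta` scale are illustrative choices, not necessarily the paper's):

```python
import numpy as np

def weighted_tv(image, delta=0.05):
    """Edge-preserving (weighted) total variation of a 2D image.

    Gradient magnitudes are weighted by w = 1 / (1 + (|grad| / delta)^2),
    so strong edges contribute less to the penalty and are smoothed less;
    `delta` sets the gradient scale treated as an edge (illustrative choice).
    """
    gx = np.diff(image, axis=0, append=image[-1:, :])
    gy = np.diff(image, axis=1, append=image[:, -1:])
    grad_mag = np.sqrt(gx**2 + gy**2)
    w = 1.0 / (1.0 + (grad_mag / delta) ** 2)
    return (w * grad_mag).sum()
```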
The next great leap toward improving the treatment of cancer with radiation will require the combined use of online adaptive and magnetic resonance guided radiation therapy techniques with automatic X-ray beam orientation selection. Unfortunately, by uniting these advancements, we are met with a substantial expansion in the required dose information and a consequential increase in the overall computational time imposed during radiation treatment planning, which cannot be handled by existing techniques for accelerating Monte Carlo dose calculation. We propose a deep convolutional neural network approach that unlocks new levels of acceleration and accuracy for post-processed Monte Carlo dose results by relying on data-driven learned representations of low-level beamlet dose distributions, instead of more limited filter-based denoising techniques that only utilize the information in a single dose input. Our method uses parallel UNET branches acting on three input channels before mixing latent understanding to produce noise-free dose predictions. Our model achieves a normalized mean absolute error of only 0.106% compared with the ground truth dose, in contrast to the 25.7% error of the undersampled MC dose fed into the network at prediction time. Our model's per-beamlet prediction time is ~220 ms, including Monte Carlo simulation and network prediction, with substantial additional acceleration expected from batched processing and combination with existing Monte Carlo acceleration techniques. Our method shows promise toward enabling clinical practice of advanced treatment technologies.
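The reported accuracy metric, normalized mean absolute error against the ground-truth dose, is simple to reproduce; a minimal sketch, assuming normalization by the maximum ground-truth dose (the paper's exact normalization is not stated here):

```python
import numpy as np

def nmae(pred, truth):
    """Normalized mean absolute error (%) of a denoised dose vs. ground truth."""
    return 100.0 * np.abs(pred - truth).mean() / truth.max()
```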
We recently built an analytical source model for a GPU-based MC dose engine. In this paper, we present a sampling strategy to efficiently utilize this source model in GPU-based dose calculation. Our source model was based on the concept of a phase-space-ring (PSR). This ring structure makes it effective to account for beam rotational symmetry, but not suitable for dose calculations with rectangular jaw settings. Hence, we first convert the PSR source model to its phase-space-let (PSL) representation. Then, in dose calculation, different types of sub-sources are sampled separately. Source sampling and particle transport are iterated, so that the particles being sampled and transported simultaneously are of the same type and close in energy, alleviating GPU thread divergence. We also present an automatic commissioning approach to adjust the model for a good representation of a clinical linear accelerator. Weighting factors were introduced to adjust the relative weights of the PSRs, determined by solving a quadratic minimization problem with a non-negativity constraint. We tested the efficiency gain of our model over a previous source model using PSL files. The efficiency was improved by a factor of 1.70~4.41, due to the avoidance of long data reading and transferring. The commissioning problem can be solved in ~20 sec. Its efficacy was tested by comparing the doses computed using the commissioned model and the uncommissioned one with measurements in different open fields in a water phantom under a clinical Varian Truebeam 6 MV beam. For the depth dose curves, the average distance-to-agreement in the build-up region was improved from 0.04~0.28 cm to 0.04~0.12 cm, and the root-mean-square (RMS) dose difference after the build-up region was reduced from 0.32%~0.67% to 0.21%~0.48%. For lateral dose profiles, the RMS difference was reduced from 0.31%~2.0% to 0.06%~0.78% in the inner beam and from 0.20%~1.25% to 0.10%~0.51% in the outer beam.
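The commissioning step described above amounts to a non-negative least-squares fit of the PSR weighting factors: find w ≥ 0 minimizing ‖Dw − m‖², where column j of D holds the dose contribution of PSR j at the measurement points and m holds the measurements. A sketch using SciPy's NNLS solver (variable names are assumptions, and the authors' actual solver may differ):

```python
import numpy as np
from scipy.optimize import nnls

def commission_weights(psr_doses, measurements):
    """Fit non-negative PSR weights so the weighted dose sum matches measurements.

    psr_doses    : (n_points, n_psr) dose contributions of each phase-space ring
    measurements : (n_points,) measured doses in the water phantom
    """
    weights, residual = nnls(psr_doses, measurements)
    return weights
```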