
An estimation-based approach to tumor segmentation in oncological PET

Added by Abhinav K. Jha
Publication date: 2020
Language: English





Tumor segmentation in oncological PET is challenging, a major reason being the partial-volume effects that arise from the low system resolution and finite voxel size. The latter results in tissue-fraction effects, i.e. voxels containing a mixture of tissue classes. Most conventional methods perform segmentation by exclusively assigning each voxel in the image to either the tumor or the normal-tissue class. Thus, these methods are inherently limited in modeling tissue-fraction effects. To address this inherent limitation, we propose an estimation-based approach to segmentation. Specifically, we develop a Bayesian method that estimates the posterior mean of the fractional volume that the tumor occupies within each image voxel. The proposed method, implemented using an encoder-decoder network, was first evaluated using clinically realistic 2-D simulation studies with known ground truth, in the context of segmenting the primary tumor in PET images of patients with lung cancer. The evaluation studies demonstrated that the method accurately estimated the tumor-fraction areas and significantly outperformed widely used conventional methods, including a U-net-based method, on the task of segmenting the tumor. In addition, the proposed method was relatively insensitive to partial-volume effects and yielded reliable tumor segmentation for different clinical-scanner configurations. The method was then evaluated using clinical images of patients with stage II and III non-small cell lung cancer from the ACRIN 6668/RTOG 0235 multi-center clinical trial. Here, the results showed that the proposed method significantly outperformed all other considered methods and yielded accurate tumor segmentation on patient images, with a Dice similarity coefficient of 0.82 (95% CI: 0.78, 0.86). Overall, this study demonstrates the efficacy of the proposed method in accurately segmenting tumors in PET images.
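The tissue-fraction effect described above has a simple geometric reading: the ground-truth label for each PET voxel is not binary but the fraction of the voxel occupied by tumor. A minimal sketch of how such fractional ground truth can be derived from a high-resolution binary tumor mask by block averaging (function name and the down-sampling factor are illustrative, not from the paper):

```python
import numpy as np

def fractional_volume_map(high_res_mask: np.ndarray, factor: int) -> np.ndarray:
    """Down-sample a high-resolution binary tumor mask to a coarser voxel
    grid, keeping the fraction of each voxel occupied by tumor (in [0, 1])."""
    h, w = high_res_mask.shape
    assert h % factor == 0 and w % factor == 0
    # Split the image into factor x factor blocks and average each block.
    blocks = high_res_mask.reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))

# A 4x4 mask whose top-left 2x2 region is tumor, binned by a factor of 2:
mask = np.zeros((4, 4))
mask[:2, :2] = 1
print(fractional_volume_map(mask, 2))  # [[1. 0.] [0. 0.]]
```

A voxel straddling the tumor boundary would receive an intermediate value such as 0.25, which is exactly the quantity the proposed estimator targets, rather than a hard 0/1 label.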




Related research

Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks (CNNs) have shown impressive results and potential towards fully automated segmentation in medical imaging, and particularly in PET imaging. To cope with the limited availability of the annotated data that supervised AI methods require, given that manual delineation is tedious and error-prone, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors or normal organs in single- and bi-modality scans. This work provides a review of existing AI techniques for segmentation tasks and of the evaluation criteria for translating AI-based segmentation efforts towards routine adoption in clinical workflows.
Objective evaluation of new and improved methods for PET imaging requires access to images with ground truth, as can be obtained through simulation studies. However, for these studies to be clinically relevant, it is important that the simulated images are clinically realistic. In this study, we develop a stochastic and physics-based method to generate realistic oncological two-dimensional (2-D) PET images, where the ground-truth tumor properties are known. The developed method extends a previously proposed approach that captures the observed variability in tumor properties from an actual patient population. Further, we extend that approach to model intra-tumor heterogeneity using a lumpy object model. To quantitatively evaluate the clinical realism of the simulated images, we conducted a human-observer study. This was a two-alternative forced-choice (2AFC) study with trained readers (five PET physicians and one PET physicist). Our results showed that the readers had an average accuracy of ~50% in the 2AFC study, i.e. chance-level performance, indicating that the simulated images were indistinguishable from clinical images. Further, the developed simulation method was able to generate a wide variety of clinically observed tumor types. These results provide evidence for applying this method in 2-D PET imaging studies, and motivate its extension to generating 3-D PET images.
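The lumpy object model mentioned above describes heterogeneous activity as a sum of blobs at random locations. A minimal sketch of the idea using isotropic Gaussian lumps (function name, lump count, amplitude, and width are illustrative assumptions, not the paper's parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def lumpy_image(shape=(64, 64), n_lumps=10, amplitude=1.0, width=4.0):
    """Sum of Gaussian 'lumps' at uniformly random centers: a simple
    lumpy object model for heterogeneous tracer uptake."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    img = np.zeros(shape)
    for _ in range(n_lumps):
        cy = rng.uniform(0, shape[0])
        cx = rng.uniform(0, shape[1])
        # Add one Gaussian blob centered at (cy, cx).
        img += amplitude * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2)
                                  / (2 * width ** 2))
    return img
```

Because the lump centers are drawn at random, each call yields a different heterogeneity pattern, which is what lets a simulation framework span a population of tumor appearances rather than a single template.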
Quantitative measures of uptake in caudate, putamen, and globus pallidus in dopamine transporter (DaT) brain SPECT have potential as biomarkers for the severity of Parkinson disease. Reliable quantification of uptake requires accurate segmentation of these regions. However, segmentation is challenging in DaT SPECT due to partial-volume effects, system noise, physiological variability, and the small size of these regions. To address these challenges, we propose an estimation-based approach to segmentation. This approach estimates the posterior mean of the fractional volume occupied by caudate, putamen, and globus pallidus within each voxel of a 3D SPECT image. The estimate is obtained by minimizing a cost function based on the binary cross-entropy loss between the true and estimated fractional volumes over a population of SPECT images, where the distribution of the true fractional volumes is obtained from magnetic resonance images from clinical populations. The proposed method accounts for both sources of partial-volume effects in SPECT, namely the limited system resolution and tissue-fraction effects. The method was implemented using an encoder-decoder network and evaluated using realistic clinically guided SPECT simulation studies, where the ground-truth fractional volumes were known. The method significantly outperformed all other considered segmentation methods and yielded accurate segmentation with Dice similarity coefficients of ~0.80 for all regions. The method was relatively insensitive to changes in voxel size. Further, the method was relatively robust up to ±10 degrees of patient head tilt along the transaxial, sagittal, and coronal planes. Overall, the results demonstrate the efficacy of the proposed method to yield accurate fully automated segmentation of caudate, putamen, and globus pallidus in 3D DaT-SPECT images.
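The cost function described above applies the binary cross-entropy loss to continuous targets: the per-voxel fractional volumes in [0, 1] rather than hard 0/1 labels. A minimal numpy sketch of that loss (function name and the clipping epsilon are illustrative assumptions):

```python
import numpy as np

def fractional_bce(true_frac: np.ndarray, est_frac: np.ndarray,
                   eps: float = 1e-7) -> float:
    """Binary cross-entropy between true and estimated per-voxel fractional
    volumes; unlike hard-label BCE, the targets are continuous in [0, 1]."""
    est = np.clip(est_frac, eps, 1 - eps)  # avoid log(0)
    return float(-np.mean(true_frac * np.log(est)
                          + (1 - true_frac) * np.log(1 - est)))
```

For a boundary voxel with true fraction 0.5, the loss is minimized when the network also predicts 0.5 (the minimum value there is log 2, the entropy of the target), so the estimator is pushed toward the fractional ground truth rather than toward a saturated 0 or 1.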
Precise quantitative delineation of tumor hypoxia is essential in radiation therapy treatment planning to improve treatment efficacy by targeting hypoxic sub-volumes. We developed a combined imaging system of positron emission tomography (PET) and electron paramagnetic resonance imaging (EPRI) of molecular oxygen to investigate the accuracy of PET imaging in assessing tumor hypoxia. The PET/EPRI combined imaging system aims to use EPRI to precisely measure the oxygen partial pressure in tissues. This will evaluate the validity of PET hypoxic tumor imaging against (near-)simultaneously acquired EPRI as ground truth. The combined imaging system was constructed by integrating a small-animal PET scanner (inner ring diameter 62 mm and axial field of view 25.6 mm) and an EPRI subsystem (field strength 25 mT and resonant frequency 700 MHz). The compatibility between the PET and EPRI subsystems was tested with both phantom and animal imaging. Hypoxic imaging on a tumor mouse model using the $^{18}$F-fluoromisonidazole radiotracer was conducted with the developed PET/EPRI system. We report the development and initial imaging results obtained from the PET/EPRI combined imaging system.
Luca L. Weishaupt, 2021
$\bf{Purpose:}$ The goal of this study was (i) to use artificial intelligence to automate the traditionally labor-intensive process of manual segmentation of tumor regions in pathology slides performed by a pathologist and (ii) to validate the use of a well-known and readily available deep learning architecture. Automation will reduce the human error involved in manual delineation, increase efficiency, and result in accurate and reproducible segmentation. This advancement will alleviate the bottleneck in clinical and research workflows caused by a lack of pathologist time. Our application is patient-specific microdosimetry and radiobiological modeling, which builds on the contoured pathology slides. $\bf{Methods:}$ A U-Net architecture was used to segment tumor regions in pathology core biopsies of lung tissue with adenocarcinoma stained using hematoxylin and eosin. A pathologist manually contoured the tumor regions in 56 images with binary masks for training. Overlapping patch extraction with various patch sizes and image downsampling were investigated individually. Data augmentation and 8-fold cross-validation were used. $\bf{Results:}$ The U-Net achieved accuracy of 0.91$\pm$0.06, specificity of 0.90$\pm$0.08, sensitivity of 0.92$\pm$0.07, and precision of 0.8$\pm$0.1. The F1/DICE score was 0.85$\pm$0.07, with a segmentation time of 3.24$\pm$0.03 seconds per image, achieving a 370$\pm$3 times increase in efficiency over manual segmentation. In some cases, the U-Net correctly delineated the tumor's stroma from its epithelial component in regions that were classified as tumor by the pathologist. $\bf{Conclusion:}$ The U-Net architecture can segment images with a level of efficiency and accuracy that makes it suitable for tumor segmentation of histopathological images in fields such as radiotherapy dosimetry, specifically in the subfield of microdosimetry.
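Several of the abstracts above report segmentation quality as an F1/Dice score, which for binary masks is twice the overlap divided by the total mask size. A minimal sketch of that metric on two small binary masks (function name and the example arrays are illustrative):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (equal to F1 for binary masks):
    2*|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks agree perfectly.
    return 2 * inter / denom if denom else 1.0

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(a, b))  # 2*2 / (3+3) ≈ 0.667
```

A Dice score of 0.82, as reported for the PET method above, therefore means the predicted and reference tumor masks overlap in roughly 82% of their combined extent.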
