Objective evaluation of new and improved methods for PET imaging requires access to images with known ground truth, as can be obtained through simulation studies. However, for these studies to be clinically relevant, it is important that the simulated images are clinically realistic. In this study, we develop a stochastic and physics-based method to generate realistic oncological two-dimensional (2-D) PET images in which the ground-truth tumor properties are known. The developed method extends a previously proposed approach that captures the variability in tumor properties observed across an actual patient population. Further, we extend that approach to model intra-tumor heterogeneity using a lumpy object model. To quantitatively evaluate the clinical realism of the simulated images, we conducted a human-observer study: a two-alternative forced-choice (2AFC) study with trained readers (five PET physicians and one PET physicist). Our results showed that the readers had an average accuracy of ~50% in the 2AFC study, i.e., they could not distinguish simulated from real images better than chance. Further, the developed simulation method was able to generate a wide variety of clinically observed tumor types. These results provide evidence for the application of this method to 2-D PET imaging applications and motivate its extension to generate 3-D PET images.
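The lumpy object model mentioned above is commonly formulated as a superposition of Gaussian "lumps" at randomly drawn centers, with the number of lumps drawn from a Poisson distribution. A minimal 2-D sketch follows; the function name and all parameter values are illustrative assumptions, not those used in the study:

```python
import numpy as np

def lumpy_object(shape=(64, 64), mean_lumps=10, magnitude=1.0,
                 width=3.0, rng=None):
    """Sample a 2-D lumpy object: a Poisson-distributed number of
    Gaussian lumps placed at uniformly random centers."""
    rng = np.random.default_rng(rng)
    n_lumps = rng.poisson(mean_lumps)
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    obj = np.zeros(shape)
    for _ in range(n_lumps):
        cy = rng.uniform(0, shape[0])
        cx = rng.uniform(0, shape[1])
        # Each lump is an isotropic Gaussian blob added to the object.
        obj += magnitude * np.exp(-((y - cy) ** 2 + (x - cx) ** 2)
                                  / (2.0 * width ** 2))
    return obj
```

In such a model, the lump density, magnitude, and width control the spatial scale and contrast of the heterogeneity, which is how intra-tumor heterogeneity can be tuned stochastically.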
Tumor segmentation in oncological PET is challenging, a major reason being partial-volume effects due to the low system resolution and finite voxel size. The latter results in tissue-fraction effects, i.e., voxels containing a mixture of tissue classes. Most conventional methods perform segmentation by exclusively assigning each voxel in the image to either the tumor or the normal-tissue class. Thus, these methods are inherently limited in modeling tissue-fraction effects. To address this inherent limitation, we propose an estimation-based approach to segmentation. Specifically, we develop a Bayesian method that estimates the posterior mean of the fractional volume that the tumor occupies within each image voxel. The proposed method, implemented using an encoder-decoder network, was first evaluated using clinically realistic 2-D simulation studies with known ground truth, in the context of segmenting the primary tumor in PET images of patients with lung cancer. These evaluation studies demonstrated that the method accurately estimated the tumor-fraction areas and significantly outperformed widely used conventional methods, including a U-net-based method, on the task of segmenting the tumor. In addition, the proposed method was relatively insensitive to partial-volume effects and yielded reliable tumor segmentation for different clinical-scanner configurations. The method was then evaluated using clinical images of patients with stage II and III non-small cell lung cancer from the ACRIN 6668/RTOG 0235 multi-center clinical trial. Here, the results showed that the proposed method significantly outperformed all other considered methods and yielded accurate tumor segmentation on patient images, with a Dice similarity coefficient of 0.82 (95% CI: 0.78, 0.86). Overall, this study demonstrates the efficacy of the proposed method in accurately segmenting tumors in PET images.
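The Dice similarity coefficient reported above quantifies the overlap between an estimated segmentation and a reference segmentation as 2|A ∩ B| / (|A| + |B|). A minimal sketch for binary masks (the function name and the empty-mask convention are assumptions for illustration):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |intersection| / (|pred| + |truth|), ranging from 0 to 1."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A score of 1 indicates perfect overlap and 0 indicates no overlap, so the reported 0.82 corresponds to a high degree of agreement with the reference delineations.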
Artificial intelligence (AI) techniques for image-based segmentation have garnered much attention in recent years. Convolutional neural networks (CNNs) have shown impressive results and potential towards fully automated segmentation in medical imaging, and particularly in PET imaging. To cope with the limited availability of the annotated data required by supervised AI methods, given that manual delineation is tedious and error-prone, semi-supervised and unsupervised AI techniques have also been explored for segmentation of tumors and normal organs in single- and bi-modality scans. This work provides a review of existing AI techniques for segmentation tasks and of the evaluation criteria for translating AI-based segmentation efforts towards routine adoption in clinical workflows.
Artificial intelligence (AI)-based methods are showing promise in multiple medical-imaging applications. Thus, there is substantial interest in the clinical translation of these methods, which requires, in turn, that they be evaluated rigorously. In this paper, our goal is to lay out a framework for objective task-based evaluation of AI methods. We also provide a list of tools available in the literature to conduct this evaluation, and we outline the important role of physicians in conducting these evaluation studies. The examples in this paper are presented in the context of PET, with a focus on neural-network-based methods. However, the framework is also applicable to evaluating other medical-imaging modalities and other types of AI methods.
Optical tomographic cross-sectional images of biological samples were made possible by interferometric imaging techniques such as Optical Coherence Tomography (OCT). Owing to its unprecedented view of the sample, OCT has become a gold standard, notably for human retinal imaging in the clinical environment. In this Letter, we present Optical Incoherence Tomography (OIT): a fully digital method that extends the generation of tomographic retinal cross-sections to non-interferometric imaging systems such as en-face AO ophthalmoscopes. We demonstrate that OIT can be applied to different imaging modalities using back-scattered and multiply scattered light, including systems without inherent optical sectioning. We further show that OIT can be used to guide the focus position when the user is focusing blind, allowing precise imaging of translucent retinal structures, the vascular plexuses, and the retinal pigment epithelium using, respectively, split-detection, motion-contrast, and autofluorescence techniques.
Attenuation compensation (AC) is a prerequisite for reliable quantification and is beneficial for visual-interpretation tasks in single-photon emission computed tomography (SPECT). Typical AC methods require an attenuation map obtained from a transmission scan, such as a CT scan. This has several disadvantages, including increased radiation dose, higher costs, and possible misalignment between the SPECT and CT scans; moreover, a CT scan is often unavailable. In this context, we and others have shown that scattered photons in SPECT contain information to estimate the attenuation distribution. To exploit this observation, we propose a physics- and learning-based method that uses the SPECT emission data in the photopeak and scatter windows to perform transmission-less AC in SPECT. The proposed method uses data acquired in the scatter window to reconstruct an initial estimate of the attenuation map using a physics-based approach. A convolutional neural network is then trained to segment this initial estimate into different regions. Pre-defined attenuation coefficients are assigned to these regions, yielding the reconstructed attenuation map, which is then used to reconstruct the activity map using an ordered-subsets expectation-maximization-based reconstruction approach. We objectively evaluated the performance of this method using a highly realistic simulation study conducted on the clinically relevant task of detecting perfusion defects in myocardial perfusion SPECT. Our results showed no statistically significant difference between the performance achieved using the proposed method and that with the true attenuation maps. Visually, the images reconstructed using the proposed method looked similar to those reconstructed with the true attenuation maps. Overall, these results provide evidence of the capability of the proposed method to perform transmission-less AC and motivate further evaluation.
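The step of assigning pre-defined attenuation coefficients to segmented regions can be sketched as a simple label-to-value mapping. The region labels and coefficient values below are assumptions for illustration only (roughly representative of narrow-beam values near 140 keV), not those used in the study:

```python
import numpy as np

# Illustrative attenuation coefficients (cm^-1); labels and values
# are hypothetical placeholders for this sketch.
MU_BY_REGION = {0: 0.0,     # background (air)
                1: 0.155,   # soft tissue
                2: 0.025,   # lung
                3: 0.25}    # bone

def labels_to_mumap(label_map, mu_by_region=MU_BY_REGION):
    """Convert a segmented region-label image into an attenuation map
    by assigning each region its pre-defined attenuation coefficient."""
    mu_map = np.zeros(label_map.shape, dtype=float)
    for label, mu in mu_by_region.items():
        mu_map[label_map == label] = mu
    return mu_map
```

The resulting attenuation map would then feed into the subsequent reconstruction of the activity map; this sketch covers only the coefficient-assignment step, not the network segmentation or the reconstruction itself.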