As a means to extract biomarkers from medical imaging, radiomics has attracted increasing attention from researchers. However, the reproducibility and performance of radiomics in low dose CT scans are still poor, mostly due to noise. Deep learning generative models can be used to denoise these images and in turn improve radiomics reproducibility and performance. However, most generative models are trained on paired data, which can be difficult or impossible to collect. In this article, we investigate the possibility of denoising low dose CTs using cycle generative adversarial networks (GANs) to improve radiomics reproducibility and performance based on unpaired datasets. Two cycle GANs were trained: 1) from paired data, by simulating low dose CTs (i.e., introducing noise) from high dose CTs; and 2) from unpaired real low dose CTs. To accelerate convergence, a slice-paired training strategy was introduced during GAN training. The trained GANs were applied to three scenarios: 1) improving radiomics reproducibility in simulated low dose CT images; 2) improving radiomics reproducibility in same-day repeat low dose CTs (RIDER dataset); and 3) improving radiomics performance in survival prediction. Cycle GAN results were compared with a conditional GAN (CGAN) and an encoder-decoder network (EDN) trained on simulated paired data. The cycle GAN trained on simulated data improved the concordance correlation coefficients (CCC) of radiomic features from 0.87 to 0.93 on the simulated low dose CTs and from 0.89 to 0.92 on the RIDER dataset, as well as improving the AUC of survival prediction from 0.52 to 0.59. The cycle GAN trained on real data increased the CCCs of features in RIDER to 0.95 and the AUC of survival prediction to 0.58. The results show that cycle GANs trained on both simulated and real data can improve radiomics reproducibility and performance in low dose CT, achieving results similar to those of CGANs and EDNs.
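The reproducibility results above are reported as concordance correlation coefficients (CCC) between features extracted from repeat scans. The abstract does not include the authors' implementation; the following is a minimal sketch of the standard CCC (Lin's coefficient), which measures both correlation and agreement between two measurements of the same radiomic feature across subjects:

```python
import numpy as np

def concordance_cc(x, y):
    """Concordance correlation coefficient between two repeat
    measurements of the same feature (1 = perfect agreement)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances (ddof=0)
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    # Penalizes both lack of correlation and shifts in location/scale.
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

Unlike Pearson correlation, CCC drops below 1 when the two measurements differ by a constant offset or scale, which is why it is the usual choice for test-retest radiomics studies.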
As an analytic pipeline for quantitative imaging feature extraction and analysis, radiomics has grown rapidly in the past few years. Recent studies in radiomics aim to investigate the relationship between tumor imaging features and clinical outcomes. Open-source radiomics feature banks enable the extraction and analysis of thousands of predefined features. On the other hand, recent advances in deep learning have shown significant potential in the quantitative medical imaging field, raising the research question of whether predefined radiomics features carry predictive information beyond deep learning features. In this study, we propose a feature fusion method and investigate whether a combined feature bank of deep learning and predefined radiomics features can improve prognostic performance. CT images from resectable pancreatic adenocarcinoma (PDAC) patients were used to compare the overall-survival prognostic performance of common feature reduction and fusion methods against the proposed risk-score based feature fusion method. The proposed feature fusion method significantly improved prognostic performance for overall survival in resectable PDAC cohorts, elevating the area under the ROC curve by 51% compared to predefined radiomics features alone, by 16% compared to deep learning features alone, and by 32% compared to existing feature fusion and reduction methods applied to the combination of deep learning and predefined radiomics features.
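The abstract does not specify how the risk-score based fusion is computed. As a rough, hedged illustration only: one plausible reading is that each feature bank (deep learning features, predefined radiomics features) is first reduced to a scalar risk score by its own model, and the scores are then stacked as a low-dimensional fused representation. The helper names and the ridge-regression stand-in below are assumptions, not the authors' method (which may use, e.g., a Cox model):

```python
import numpy as np

def bank_risk_score(X_train, y_train, X_test, lam=1.0):
    """Fit a ridge-regularized linear risk model on one feature bank
    (a stand-in for whatever survival model the authors used) and
    return risk scores for the test set."""
    mu, sd = X_train.mean(0), X_train.std(0) + 1e-8
    Xt, Xs = (X_train - mu) / sd, (X_test - mu) / sd  # z-score by train stats
    w = np.linalg.solve(Xt.T @ Xt + lam * np.eye(Xt.shape[1]), Xt.T @ y_train)
    return Xs @ w

def fuse_risk_scores(banks_train, y_train, banks_test):
    """Reduce each feature bank to one risk score per patient, then
    stack the scores as the fused representation."""
    scores = [bank_risk_score(tr, y_train, te)
              for tr, te in zip(banks_train, banks_test)]
    return np.column_stack(scores)
```

Reducing each bank to a single score before combining avoids the dimensionality imbalance that arises when thousands of radiomics features are naively concatenated with a deep feature vector.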
Low-dose computed tomography (LDCT) scans can effectively alleviate the radiation problem, but at the cost of degraded imaging quality. In this paper, we propose a novel LDCT reconstruction network that unrolls the iterative scheme and operates in both image and manifold spaces. Because patch manifolds of medical images have low-dimensional structures, we can build graphs from the manifolds. We then simultaneously leverage spatial convolution to extract local pixel-level features from the images and incorporate graph convolution to analyze nonlocal topological features in manifold space. The experiments show that our proposed method outperforms state-of-the-art methods in both quantitative and qualitative aspects. In addition, aided by a projection loss component, our proposed method also demonstrates superior performance in semi-supervised learning: the network can remove most of the noise while preserving image details with only 10% (40 slices) of the training data labeled.
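Building a graph from the patch manifold, as described above, typically means treating image patches as nodes and connecting each patch to its nearest neighbours in patch space. The sketch below is an assumed, simplified version (non-overlapping patches, Euclidean k-NN, binary symmetric adjacency); the paper's actual graph construction may differ:

```python
import numpy as np

def patch_knn_graph(image, patch=4, k=5):
    """Build a symmetric k-nearest-neighbour adjacency matrix over
    non-overlapping patches of a 2D image, as a simple stand-in for
    graph construction on the patch manifold."""
    h, w = image.shape
    # Tile the image into (patch x patch) blocks, one node per block.
    patches = (image[:h - h % patch, :w - w % patch]
               .reshape(h // patch, patch, w // patch, patch)
               .transpose(0, 2, 1, 3)
               .reshape(-1, patch * patch))
    d2 = ((patches[:, None, :] - patches[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)          # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]  # k nearest patches per node
    A = np.zeros((len(patches), len(patches)))
    for i, js in enumerate(nbrs):
        A[i, js] = 1.0
    return np.maximum(A, A.T)             # symmetrize -> undirected graph
```

A graph convolution on this adjacency can then aggregate features from visually similar but spatially distant patches, which is the nonlocal information a purely spatial convolution misses.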
Computed tomography (CT) plays a vital role in medical diagnosis, assessment, and therapy planning. In clinical practice, concerns about increased X-ray radiation exposure are attracting more and more attention. To lower the X-ray radiation, low-dose CT is often used in certain scenarios, but this induces a degradation of CT image quality. In this paper, we propose a training method for denoising neural networks that requires no paired clean data: the network is trained to map one noisy LDCT image to its two adjacent LDCT images within a single 3D thin-layer low-dose CT scan. In other words, under certain latent assumptions, we propose an unsupervised loss function that exploits the similarity between adjacent CT slices in 3D thin-layer low-dose CT to train the denoising neural network in an unsupervised manner. For 3D thin-slice CT scanning, the proposed virtual supervised loss function is equivalent to a supervised loss function with paired noisy and clean samples when the noise in different slices of a single scan is uncorrelated and zero-mean. Experiments on the Mayo LDCT dataset and a real pig head demonstrated superior performance over existing unsupervised methods.
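The claimed equivalence rests on a standard Noise2Noise-style identity: when the target's noise is zero-mean and uncorrelated with the prediction error, the expected MSE against a noisy target equals the MSE against the clean target plus the noise variance, so both losses share the same minimizer. A numerical sketch of that identity (the "denoiser output" here is a synthetic stand-in, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.standard_normal(100_000)   # proxy for a clean slice
sigma = 0.3                            # noise level of the adjacent slice
pred = clean + 0.1                     # an imperfect "denoiser" output
# Adjacent noisy slice: clean signal plus zero-mean, uncorrelated noise.
noisy_target = clean + sigma * rng.standard_normal(clean.size)

mse_noisy = ((pred - noisy_target) ** 2).mean()  # unsupervised loss
mse_clean = ((pred - clean) ** 2).mean()         # supervised loss
# E[(f(x) - y)^2] = E[(f(x) - c)^2] + sigma^2 up to sampling error,
# so minimizing against the noisy neighbour minimizes the supervised loss.
print(mse_noisy, mse_clean + sigma ** 2)
```

The constant offset `sigma ** 2` does not depend on the network, which is why training against noisy adjacent slices can stand in for training against unavailable clean slices.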
Radiomics is an active area of research focusing on high-throughput feature extraction from medical images, with a wide array of applications in clinical practice such as clinical decision support in oncology. However, noise in low dose computed tomography (CT) scans can impair the accurate extraction of radiomic features. In this article, we investigate the possibility of using deep learning generative models to improve the performance of radiomics from low dose CTs. We used two datasets of low dose CT scans - NSCLC Radiogenomics and LIDC-IDRI - as test datasets for two tasks: pre-treatment survival prediction and lung cancer diagnosis. We used encoder-decoder networks and conditional generative adversarial networks (CGANs) trained in a previous study as generative models to transform low dose CT images into full dose CT images. Radiomic features extracted from the original and improved CT scans were used to build two classifiers - a support vector machine (SVM) and a deep attention-based multiple instance learning model - for survival prediction and lung cancer diagnosis, respectively. Finally, we compared the performance of the models derived from the original and improved CT scans. Encoder-decoder networks and CGANs improved the area under the curve (AUC) of survival prediction from 0.52 to 0.57 (p-value < 0.01). For lung cancer diagnosis, encoder-decoder networks and CGANs improved the AUC from 0.84 to 0.88 and 0.89, respectively (p-value < 0.01). Moreover, there were no statistically significant differences in AUC improvement between the encoder-decoder network and the CGAN (p-value = 0.34) when the networks were trained for 75 and 100 epochs. Generative models can improve the performance of low dose CT-based radiomics in different tasks. Hence, denoising using generative models seems to be a necessary pre-processing step for calculating radiomic features from low dose CTs.
Isoprene is one of the most abundant endogenous volatile organic compounds (VOCs) contained in human breath and is considered a potentially useful biomarker for diagnostic and monitoring purposes. However, neither the exact biochemical origin of isoprene nor its physiological role is understood in sufficient depth, hindering the validation of breath isoprene tests in clinical routine. Exhaled isoprene concentrations are reported to change under different clinical and physiological conditions, especially in response to enhanced cardiovascular and respiratory activity. Investigating isoprene exhalation kinetics during dynamic exercise helps to gather the experimental information relevant to understanding the gas exchange phenomena associated with this important VOC. A first model for isoprene in exhaled breath has been developed by our research group. In the present paper, we aim to give a concise overview of this model and describe its role in providing supportive evidence for a peripheral (extrahepatic) source of isoprene. In this sense, the results presented here may enable a new perspective on the biochemical processes governing isoprene formation in the human body.