Radiomics is an active area of research focusing on high-throughput feature extraction from medical images, with a wide array of applications in clinical practice such as clinical decision support in oncology. However, noise in low dose computed tomography (CT) scans can impair the accurate extraction of radiomic features. In this article, we investigate the possibility of using deep learning generative models to improve the performance of radiomics from low dose CTs. We used two datasets of low dose CT scans (NSCLC Radiogenomics and LIDC-IDRI) as test datasets for two tasks: pre-treatment survival prediction and lung cancer diagnosis. We used encoder-decoder networks and conditional generative adversarial networks (CGANs) trained in a previous study as generative models to transform low dose CT images into full dose CT images. Radiomic features extracted from the original and improved CT scans were used to build two classifiers, a support vector machine (SVM) and a deep attention-based multiple instance learning model, for survival prediction and lung cancer diagnosis, respectively. Finally, we compared the performance of the models derived from the original and improved CT scans. The encoder-decoder networks and CGANs improved the area under the curve (AUC) of survival prediction from 0.52 to 0.57 (p-value < 0.01). For lung cancer diagnosis, the encoder-decoder network and the CGAN improved the AUC from 0.84 to 0.88 and 0.89, respectively (p-value < 0.01). Moreover, there was no statistically significant difference between the AUC improvements achieved by the encoder-decoder network and the CGAN (p-value = 0.34) when the networks were trained for 75 and 100 epochs. Generative models can improve the performance of low dose CT-based radiomics in different tasks. Hence, denoising using generative models seems to be a necessary pre-processing step for calculating radiomic features from low dose CTs.
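As a rough illustration of the kind of pipeline this abstract describes, the sketch below extracts predefined radiomic features with pyradiomics and scores survival prediction with a scikit-learn SVM. The inputs `scan_pairs` and `survival_labels` are hypothetical placeholders, not the study's data, and the denoising step (applying the trained generative model to each scan) is assumed to have happened upstream.

```python
# Minimal sketch, assuming pyradiomics for feature extraction and
# scikit-learn for the SVM; scan_pairs and survival_labels are
# hypothetical placeholders, not the study's actual data.
import numpy as np
from radiomics import featureextractor
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

extractor = featureextractor.RadiomicsFeatureExtractor()

def radiomic_vector(image_path, mask_path):
    """Extract predefined radiomic features for one (denoised) CT scan."""
    result = extractor.execute(image_path, mask_path)
    # Keep numeric feature values, dropping pyradiomics diagnostic entries.
    return np.array([v for k, v in result.items()
                     if not k.startswith("diagnostics")], dtype=float)

def survival_auc(scan_pairs, survival_labels):
    """Train an SVM on radiomic features and report the held-out AUC."""
    X = np.stack([radiomic_vector(img, msk) for img, msk in scan_pairs])
    y = np.asarray(survival_labels)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
```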
As an analytic pipeline for quantitative imaging feature extraction and analysis, radiomics has grown rapidly in the past few years. Recent studies in radiomics aim to investigate the relationship between tumor imaging features and clinical outcomes. Open-source radiomics feature banks enable the extraction and analysis of thousands of predefined features. Meanwhile, recent advances in deep learning have shown significant potential in the quantitative medical imaging field, raising the research question of whether predefined radiomics features carry predictive information in addition to deep learning features. In this study, we propose a feature fusion method and investigate whether a combined feature bank of deep learning and predefined radiomics features can improve prognostic performance. CT images from resectable pancreatic adenocarcinoma (PDAC) patients were used to compare the overall-survival prognostic performance of common feature reduction and fusion methods against the proposed risk-score-based feature fusion method. The proposed feature fusion method significantly improved prognostic performance for overall survival in resectable PDAC cohorts, raising the area under the ROC curve by 51% compared to predefined radiomics features alone, by 16% compared to deep learning features alone, and by 32% compared to existing feature fusion and reduction methods applied to the combination of deep learning and predefined radiomics features.
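The abstract does not spell out the risk-score fusion in detail, but a minimal sketch of the general idea might look as follows: each feature bank is first collapsed into a per-patient risk score, and the scores are then combined. A logistic model is used here as a simple stand-in for the paper's exact scoring method.

```python
# Hedged sketch of risk-score-based fusion: each feature bank is reduced
# to a scalar risk score per patient, and the scores are then combined.
# The logistic model is a simplified stand-in, not the paper's method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def risk_score(X_train, y_train, X):
    """Collapse one feature bank into a scalar risk score per patient."""
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    return model.decision_function(X)

def fused_auc(X_rad_tr, X_deep_tr, y_tr, X_rad_te, X_deep_te, y_te):
    """Fuse per-bank risk scores (simple average) and evaluate the AUC."""
    fused = 0.5 * (risk_score(X_rad_tr, y_tr, X_rad_te)
                   + risk_score(X_deep_tr, y_tr, X_deep_te))
    return roc_auc_score(y_te, fused)
```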
Tracking objects of interest in a video is one of the most popular and widely applicable problems in computer vision. Over the years, however, a Cambrian explosion of use cases and benchmarks has fragmented the problem into a multitude of different experimental setups. As a consequence, the literature has fragmented too, and the novel approaches proposed by the community are now usually specialised to fit only one specific setup. To understand to what extent this specialisation is actually necessary, in this work we present UniTrack, a unified tracking solution that addresses five different tasks within the same framework. UniTrack consists of a single, task-agnostic appearance model, which can be learned in a supervised or self-supervised fashion, and multiple task-specific heads that require no training. We show how most tracking tasks can be solved within this framework, and that the same appearance model yields performance competitive with specialised methods on all five tasks considered. The framework also allows us to analyse appearance models obtained with the most recent self-supervised methods, thus significantly extending their evaluation and comparison to a larger variety of important problems. Code is available at https://github.com/Zhongdao/UniTrack.
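To make the "single appearance model plus training-free heads" design concrete, here is an illustrative sketch of one such parameter-free head, mask propagation by feature correlation. The names and structure are our assumptions, not the actual UniTrack API; the real implementation is in the repository linked above.

```python
# Illustrative sketch of a frozen appearance model and one training-free
# head (mask propagation via feature correlation); names are hypothetical,
# see the UniTrack repository for the actual implementation.
import torch
import torch.nn.functional as F

class AppearanceModel(torch.nn.Module):
    """Task-agnostic feature extractor, e.g. a (self-)supervised backbone."""
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone.eval()  # frozen: the heads need no training

    @torch.no_grad()
    def forward(self, frame):
        return self.backbone(frame)      # dense feature map, B x C x H x W

def propagation_head(feat_prev, feat_curr, mask_prev):
    """Parameter-free head: propagate a mask from the previous frame via
    softmax-normalised cosine affinity between dense features."""
    B, C, H, W = feat_prev.shape
    f_prev = F.normalize(feat_prev.flatten(2), dim=1)  # B x C x HW
    f_curr = F.normalize(feat_curr.flatten(2), dim=1)
    affinity = torch.einsum("bci,bcj->bij", f_prev, f_curr).softmax(dim=1)
    mask = torch.einsum("bij,bi->bj", affinity, mask_prev.flatten(1).float())
    return mask.view(B, H, W)
```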
Cryo-electron microscopy (cryo-EM) is capable of producing reconstructed 3D images of biomolecules at near-atomic resolution. As such, it represents one of the most promising imaging techniques in structural biology. However, raw cryo-EM images are only highly corrupted (noisy and band-pass filtered) 2D projections of the target 3D biomolecules. Reconstructing the 3D molecular shape starts with the removal of image outliers, the estimation of the orientation of the biomolecule that produced each 2D image, and the estimation of camera parameters to correct for intensity defects. Current techniques performing these tasks are often computationally expensive, while dataset sizes keep growing. There is a need for next-generation algorithms that preserve accuracy while improving speed and scalability. In this paper, we combine variational autoencoders (VAEs) and generative adversarial networks (GANs) to learn a low-dimensional latent representation of cryo-EM images. We perform an exploratory analysis of the obtained latent space, which is shown to have a structure of orbits, in the sense of Lie group theory, consistent with the acquisition procedure of cryo-EM images. This analysis leads us to design an estimation method for the orientation and camera parameters of single-particle cryo-EM images, together with an outlier detection procedure. As such, it opens the door to geometric approaches for unsupervised estimation of orientations and camera parameters, enabling fast cryo-EM biomolecule reconstruction.
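As a schematic of the VAE-GAN combination described here, the sketch below shows a toy convolutional encoder, the reparameterization trick, and a composite loss (reconstruction + KL + adversarial term). The architecture and the loss weights are illustrative assumptions, not the paper's configuration.

```python
# Schematic sketch of a VAE-GAN for 2D images; architectural details and
# loss weights (beta, lam) are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Map a cryo-EM image to the mean/log-variance of a latent Gaussian."""
    def __init__(self, latent_dim=8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(), nn.Flatten())
        self.mu = nn.LazyLinear(latent_dim)
        self.logvar = nn.LazyLinear(latent_dim)

    def forward(self, x):
        h = self.conv(x)
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) with the reparameterization trick."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

def vae_gan_loss(x, x_rec, mu, logvar, d_fake, beta=1.0, lam=0.1):
    """VAE objective plus an adversarial term that tries to fool the
    discriminator on reconstructed images (d_fake = D(x_rec) logits)."""
    rec = F.mse_loss(x_rec, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return rec + beta * kl + lam * adv
```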
As a means to extract biomarkers from medical imaging, radiomics has attracted increased attention from researchers. However, the reproducibility and performance of radiomics in low dose CT scans are still poor, mostly due to noise. Deep learning generative models can be used to denoise these images and in turn improve radiomics reproducibility and performance. However, most generative models are trained on paired data, which can be difficult or impossible to collect. In this article, we investigate the possibility of denoising low dose CTs using cycle generative adversarial networks (GANs) to improve radiomics reproducibility and performance based on unpaired datasets. Two cycle GANs were trained: 1) on paired data, by simulating low dose CTs (i.e., introducing noise) from high dose CTs; and 2) on unpaired real low dose CTs. To accelerate convergence, a slice-paired training strategy was introduced during GAN training. The trained GANs were applied to three scenarios: 1) improving radiomics reproducibility in simulated low dose CT images; 2) improving radiomics reproducibility in same-day repeat low dose CTs (RIDER dataset); and 3) improving radiomics performance in survival prediction. Cycle GAN results were compared with a conditional GAN (CGAN) and an encoder-decoder network (EDN) trained on simulated paired data. The cycle GAN trained on simulated data improved the concordance correlation coefficients (CCC) of radiomic features from 0.87 to 0.93 on simulated noisy CTs and from 0.89 to 0.92 on the RIDER dataset, as well as improving the AUC of survival prediction from 0.52 to 0.59. The cycle GAN trained on real data increased the CCCs of features on RIDER to 0.95 and the AUC of survival prediction to 0.58. The results show that cycle GANs trained on both simulated and real data can improve radiomics reproducibility and performance in low dose CT and achieve results similar to CGANs and EDNs.
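The reproducibility numbers above are concordance correlation coefficients. A small self-contained sketch of Lin's CCC, computed per radiomic feature across a cohort of repeat scans, follows; the input arrays are hypothetical, one row per patient and one column per feature.

```python
# Sketch of Lin's concordance correlation coefficient (CCC), the
# reproducibility metric quoted above; inputs are hypothetical arrays of
# features measured on a scan and its same-day repeat.
import numpy as np

def ccc(x, y):
    """Lin's CCC between two paired measurement vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

def feature_cccs(scan1_feats, scan2_feats):
    """One CCC per feature; rows are patients, columns are features."""
    return np.array([ccc(scan1_feats[:, j], scan2_feats[:, j])
                     for j in range(scan1_feats.shape[1])])
```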
Quantification of myocardial perfusion has the potential to improve the detection of regional and global flow reduction. Significant effort has been made to automate the workflow, in which one essential step is the extraction of the arterial input function (AIF). Since failure here invalidates quantification, high accuracy is required. To this end, this study presents a robust AIF detection method using a convolutional neural network (CNN) model. CNN models were trained on 25,027 scans (N=12,984 patients) assembled from three hospitals and seven scanners. A test set of 5,721 scans (N=2,805 patients) was used to evaluate model performance. The 2D+T AIF time series was input to the CNN. Two variants were investigated: a) two classes (2CS), background and foreground (LV mask); and b) three classes (3CS), background, LV foreground, and RV. The final model was deployed on MR scanners via the Gadgetron InlineAI. Loading the model on the MR scanner took ~340 ms and applying it took ~180 ms. The 3CS model successfully detected the LV in 99.98% of all test cases (1 failure out of 5,721 cases). The mean Dice ratio for 3CS was 0.87+/-0.08, with 92.0% of all test cases having a Dice ratio >0.75, while the 2CS model gave a lower Dice ratio of 0.82+/-0.22 (P<1e-5). AIF signals extracted using the CNN were further compared to manual ground truth for foot time, peak time, first-pass duration, peak value, and area under the curve. No significant differences were found for any of these features (P>0.2). This study proposed, validated, and deployed a robust CNN solution to detect the LV for extraction of the AIF signal used in fully automated perfusion flow mapping. A very large data cohort was assembled, and the resulting models were deployed to MR scanners for fully inline AI in clinical hospitals.
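For concreteness, the sketch below shows the two quantities this evaluation relies on: the Dice overlap between predicted and manual LV masks, and the AIF curve obtained by averaging the 2D+T series over the detected mask. The array shapes are assumptions for illustration, not the deployed Gadgetron code.

```python
# Sketch of the evaluation quantities: Dice overlap between LV masks and
# the AIF signal averaged over the detected mask. Shapes are assumptions
# (series: T x H x W, masks: H x W boolean), not the deployed code.
import numpy as np

def dice(pred, truth):
    """Dice similarity between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def extract_aif(series, lv_mask):
    """Average the 2D+T perfusion series over the LV mask to obtain the
    arterial input function as a signal-versus-time curve."""
    return series[:, lv_mask].mean(axis=1)   # one value per time frame
```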