Purpose: Many useful image quality metrics for evaluating linear image reconstruction techniques do not apply to, or are difficult to interpret for, non-linear image reconstruction. The vast majority of metrics employed for evaluating non-linear image reconstruction are based on some form of global image fidelity, such as image root mean square error (RMSE). Use of such metrics can lead to over-regularization in the sense that they can favor removal of subtle details in the image. To address this shortcoming, we develop an image quality metric based on signal detection that serves as a surrogate for the qualitative loss of fine image details. Methods: The metric is demonstrated in the context of a breast CT simulation, where different equal-dose configurations are considered. The configurations differ in the number of projections acquired. Image reconstruction is performed with a non-linear algorithm based on total variation constrained least-squares (TV-LSQ). The images are evaluated visually, with image RMSE, and with the proposed signal-detection-based metric. The latter uses a small signal and computes detectability in the sinogram and in the reconstructed image. Loss of signal detectability through the image reconstruction process is taken as a quantitative measure of loss of fine details in the image. Results: Loss of signal detectability is seen to correlate well with the blocky or patchy appearance due to over-regularization with TV-LSQ, and this trend runs counter to the image RMSE metric, which tends to favor the over-regularized images. Conclusions: The proposed signal-detection-based metric provides an image quality assessment that is complementary to that of image RMSE. Using the two metrics in concert may yield a useful prescription for determining CT algorithm and configuration parameters when non-linear image reconstruction is used.
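The idea of a signal-detection surrogate can be illustrated with a minimal sketch: for a known small signal in white Gaussian noise, the ideal linear observer's detectability is d' = ||s||/sigma, and any smoothing introduced by over-regularization attenuates fine detail and lowers d'. The 1-D signal, noise level, and moving-average blur below are illustrative assumptions, not the authors' breast CT setup.

```python
import numpy as np

sigma = 1.0  # assumed white-noise standard deviation

# A small, sharp "signal" on a 1-D grid (stand-in for a subtle image detail).
s = np.zeros(64)
s[30:34] = 1.0

def detectability(signal, sigma):
    # d' for a known signal in white Gaussian noise (matched-filter SNR).
    return np.linalg.norm(signal) / sigma

# Crude surrogate for an over-regularized reconstruction: a moving-average blur.
kernel = np.ones(9) / 9.0
s_smooth = np.convolve(s, kernel, mode="same")

d_in = detectability(s, sigma)        # detectability before "reconstruction"
d_out = detectability(s_smooth, sigma)  # detectability after smoothing
loss = d_out / d_in                   # fraction of detectability retained
```

The ratio `d_out / d_in` plays the role of the paper's loss-of-detectability measure: the stronger the smoothing, the smaller the ratio.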
Low-count reconstruction remains a challenge for Positron Emission Tomography (PET), even with recent progress in time-of-flight (TOF) resolution. In that setting, the bias between the acquired histogram, composed of low values or zeros, and the expected histogram, obtained from the forward projector, is propagated to the image, resulting in a biased reconstruction. This can be exacerbated by finer resolution of the TOF information, which further sparsifies the acquired histogram. We propose a new approach to circumvent this limitation of the classical reconstruction model. It consists of extending the parametrization of the reconstruction scheme to also explicitly include the projection domain. This parametrization has more degrees of freedom than the log-likelihood model, which cannot be harnessed in classical circumstances. We hypothesize that with ultra-fast TOF this new approach would not only be viable for low-count reconstruction but also more adequate than the classical reconstruction model. An implementation of this approach is compared to the log-likelihood model by using two-dimensional simulations of a hot-spot phantom. The proposed model achieves contrast recovery coefficients similar to those of MLEM, except for the smallest structures, where the low-count nature of the simulations makes it difficult to draw conclusions. Also, the new model seems to converge toward a less noisy solution than MLEM. These results suggest that this new approach has potential for low-count reconstruction with ultra-fast TOF.
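For reference, the classical log-likelihood baseline (MLEM) against which the new parametrization is compared has a simple multiplicative update. The toy 1-D system matrix and count levels below are assumptions for illustration, unrelated to the authors' TOF simulations.

```python
import numpy as np

# Toy Poisson inverse problem: A maps image x to expected counts, y is measured.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(20, 8))    # assumed toy forward projector
x_true = rng.uniform(0.5, 2.0, size=8)
y = rng.poisson(A @ x_true).astype(float)  # low-count Poisson data

x = np.ones(8)         # uniform, strictly positive initial image
sens = A.sum(axis=0)   # sensitivity image (column sums of A)
for _ in range(200):
    proj = A @ x                         # forward projection
    ratio = np.where(proj > 0, y / proj, 0.0)
    x *= (A.T @ ratio) / sens            # multiplicative MLEM update
```

The update preserves non-negativity by construction, and the sensitivity-weighted total activity matches the total measured counts after each iteration, a standard property of MLEM.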
This work seeks to develop an algorithm for image reconstruction by directly inverting the non-linear data model in spectral CT. Using the non-linear data model, we formulate the image-reconstruction problem as a non-convex optimization program, and develop a non-convex primal-dual (NCPD) algorithm to solve the program. We devise multiple convergence conditions and perform numerical verification studies to demonstrate that the NCPD algorithm can solve the non-convex optimization program and, under appropriate data conditions, can invert the non-linear data model. Using the NCPD algorithm, we then reconstruct monochromatic images from simulated and real data of numerical and physical phantoms acquired with a standard, full-scan dual-energy configuration. The results of the reconstruction studies show that the NCPD algorithm can accurately correct for the non-linear beam-hardening effect. Furthermore, we apply the NCPD algorithm to simulated and real data of the numerical and physical phantoms collected with non-standard, short-scan dual-energy configurations, and obtain monochromatic images comparable to those of the standard, full-scan study, thus revealing the potential of the NCPD algorithm for enabling non-standard scanning configurations in spectral CT, where existing indirect methods are limited.
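The non-linearity that NCPD must invert can be seen in a minimal sketch of the spectral data model: measured transmission is a spectrum-weighted sum of exponentials, so the log-transformed data are not linear in material thickness (beam hardening). The energy bins, spectrum weights, and attenuation values below are assumed for illustration only.

```python
import numpy as np

q = np.array([0.2, 0.5, 0.3])    # assumed normalized spectrum weights per bin
mu = np.array([0.4, 0.25, 0.2])  # assumed attenuation per bin (1/cm), one material

def transmission(thickness_cm):
    # Spectrum-weighted Beer-Lambert transmission through a slab.
    return np.sum(q * np.exp(-mu * thickness_cm))

# -log(transmission) grows sub-linearly with thickness: the hallmark of the
# beam-hardening non-linearity that a linear (monochromatic) model misses.
p1 = -np.log(transmission(1.0))  # log data for a 1 cm slab
p2 = -np.log(transmission(2.0))  # log data for a 2 cm slab
```

A linear model would predict `p2 == 2 * p1`; the shortfall is exactly the beam-hardening effect the NCPD algorithm corrects for.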
Multi-contrast images are commonly acquired together to maximize complementary diagnostic information, albeit at the expense of longer scan times. A time-efficient strategy to acquire high-quality multi-contrast images is to accelerate individual sequences and then reconstruct undersampled data with joint regularization terms that leverage common information across contrasts. However, these terms can cause features that are unique to a subset of contrasts to leak into the other contrasts. Such leakage of features may appear as artificial tissues, thereby misleading diagnosis. The goal of this study is to develop a compressive sensing method for multi-channel multi-contrast magnetic resonance imaging (MRI) that optimally utilizes shared information while preventing feature leakage. The joint regularization terms of group sparsity and colour total variation are used to exploit common features across images, while individual sparsity and total variation terms are also used to prevent leakage of distinct features across contrasts. The multi-channel multi-contrast reconstruction problem is solved via a fast algorithm based on the Alternating Direction Method of Multipliers (ADMM). The proposed method is compared against reconstructions using only individual and only joint regularization terms. Comparisons were performed on single-channel simulated and multi-channel in-vivo datasets in terms of reconstruction quality and neuroradiologist reader scores. The proposed method demonstrates rapid convergence and improved image quality for both simulated and in-vivo datasets. Furthermore, while reconstructions that solely use joint regularization terms are prone to leakage of features, the proposed method reliably avoids leakage via simultaneous use of joint and individual terms, thereby holding great promise for clinical use.
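The group-sparsity term used in such joint regularization leads, inside ADMM-type solvers, to a block soft-thresholding proximal step: coefficients at the same location across all contrasts form a group, and the whole group is shrunk together. The sketch below shows only this proximal operator, with assumed toy values, not the authors' full reconstruction.

```python
import numpy as np

def group_soft_threshold(X, tau):
    # X: (n_coeffs, n_contrasts); each row is one cross-contrast group.
    # Block soft-thresholding: shrink each row's Euclidean norm by tau,
    # zeroing groups whose norm falls below the threshold.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

X = np.array([[3.0, 4.0],    # strong shared feature: kept, uniformly shrunk
              [0.1, 0.2]])   # weak group: zeroed entirely
Y = group_soft_threshold(X, tau=1.0)
```

Because the whole group survives or vanishes together, this term promotes shared support across contrasts; the individual sparsity and TV terms in the proposed method counterbalance it so that contrast-specific features are not forced into the other images.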
Since the advent of deep convolutional neural networks (DNNs), computer vision has seen extremely rapid progress that has led to huge advances in medical imaging. This article does not aim to cover all aspects of the field but focuses on a particular topic, image-to-image translation. Although the topic may not sound familiar, it turns out that many seemingly unrelated applications can be understood as instances of image-to-image translation. Such applications include (1) noise reduction, (2) super-resolution, (3) image synthesis, and (4) reconstruction. The same underlying principles and algorithms work for various tasks. Our aim is to introduce some of the key ideas on this topic from a uniform point of view. We introduce core ideas and jargon that are specific to image processing with DNNs. An intuitive grasp of the core ideas and a knowledge of the technical terms would be of great help to the reader in understanding existing and future applications. Most of the recent applications that build on image-to-image translation are based on one of two fundamental architectures, called pix2pix and CycleGAN, depending on whether the available training data are paired or unpaired. We provide computer codes that implement these two architectures with various enhancements. Our codes are available online under the very permissive MIT license. We provide a hands-on tutorial for training a denoising model based on our codes. We hope that this article, together with the codes, will provide both an overview and the details of the key algorithms, and that it will serve as a basis for the development of new applications.
Purpose: To develop a single-shot multi-slice T1 mapping method by combining simultaneous multi-slice (SMS) excitations, single-shot inversion-recovery (IR) radial fast low-angle shot (FLASH), and a nonlinear model-based reconstruction method. Methods: SMS excitations are combined with a single-shot IR radial FLASH sequence for data acquisition. A previously developed single-slice calibrationless model-based reconstruction is extended to SMS, formulating the estimation of parameter maps and coil sensitivities from all slices as a single nonlinear inverse problem. Joint-sparsity constraints are further applied to the parameter maps to improve T1 precision. Validations of the proposed method are performed on a phantom and on the human brain and liver in six healthy adult subjects. Results: Phantom results confirm good T1 accuracy and precision of the simultaneously acquired multi-slice T1 maps in comparison to single-slice references. In-vivo human brain studies demonstrate the better performance of SMS acquisitions compared to the conventional spoke-interleaved multi-slice acquisition using model-based reconstruction. Apart from good accuracy and precision, the results for six healthy subjects in both brain and abdominal studies confirm good repeatability between scans and re-scans. The proposed method can simultaneously acquire T1 maps for five slices of a human brain ($0.75 \times 0.75 \times 5$ mm$^3$) or three slices of the abdomen ($1.25 \times 1.25 \times 6$ mm$^3$) within four seconds. Conclusion: The IR SMS radial FLASH acquisition together with a non-linear model-based reconstruction enables rapid high-resolution multi-slice T1 mapping with good accuracy, precision, and repeatability.
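The signal model underlying such IR FLASH T1 mapping can be sketched briefly: during continuous FLASH readout after inversion, the magnetization relaxes as S(t) = Mss - (Mss + M0) exp(-t/T1*), and the true T1 follows from the Look-Locker correction T1 = T1* M0/Mss. The parameter values below are assumed for illustration, not taken from the paper.

```python
import numpy as np

# Assumed ground-truth parameters (illustrative values).
M0, T1 = 1.0, 1.2        # equilibrium magnetization, true T1 in seconds
T1s = 0.5                # apparent relaxation time T1* under FLASH readout
Mss = M0 * T1s / T1      # steady-state magnetization implied by the model

t = np.linspace(0.01, 4.0, 200)          # inversion times in seconds
S = Mss - (Mss + M0) * np.exp(-t / T1s)  # noiseless IR FLASH signal curve

# A model-based reconstruction estimates (Mss, M0, T1s) from data; the true
# T1 is then recovered via the Look-Locker correction.
T1_est = T1s * M0 / Mss
```

In the paper's method this three-parameter model is fitted jointly with the coil sensitivities across all SMS slices, rather than curve by curve as in this toy sketch.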