Computed tomography (CT) plays a vital role in medical diagnosis, assessment, and therapy planning. In clinical practice, concerns about X-ray radiation exposure have attracted increasing attention. To lower the radiation dose, low-dose CT (LDCT) is used in certain scenarios, but it degrades CT image quality. In this paper, we propose a training method for denoising neural networks that requires no paired clean data: the denoising network is trained to map one noisy LDCT image simultaneously to its two adjacent LDCT images within a single 3D thin-slice low-dose CT scan. In other words, under certain latent assumptions, we propose an unsupervised loss function that exploits the similarity between adjacent CT slices in 3D thin-slice low-dose CT to train the denoising network in an unsupervised manner. For 3D thin-slice CT scanning, the proposed virtual supervised loss function is equivalent to a supervised loss function with paired noisy and clean samples when the noise in different slices of a single scan is uncorrelated and zero-mean. Experiments on the Mayo LDCT dataset and a realistic pig head demonstrate superior performance over existing unsupervised methods.
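The mapping of a middle slice onto both of its neighbors can be sketched as a simple loss function. This is an illustrative reading of the abstract's objective, not the paper's exact formulation; the function name and the equal weighting of the two neighbor terms are assumptions.

```python
import numpy as np

def adjacent_slice_loss(denoised_mid, prev_slice, next_slice):
    """Hypothetical sketch of the unsupervised objective: the denoiser's
    output for the middle slice is regressed onto both neighboring LDCT
    slices. When the noise across slices is zero-mean and uncorrelated,
    minimizing this is, in expectation, equivalent to a supervised loss
    against the unknown clean slice."""
    return 0.5 * (np.mean((denoised_mid - prev_slice) ** 2)
                  + np.mean((denoised_mid - next_slice) ** 2))
```

In practice the two neighbor terms would be evaluated on the denoising network's output and backpropagated; only the noisy scan itself is needed for training.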
Achieving high-quality reconstructions from low-dose computed tomography (LDCT) measurements is of great importance in clinical settings. Model-based image reconstruction methods have proven effective at removing artifacts in LDCT. In this work, we propose an approach to learn a rich two-layer clustering-based sparsifying transform model (MCST2), in which image patches and their subsequent feature maps (filter residuals) are clustered into groups, with different learned sparsifying filters per group. We investigate a penalized weighted least squares (PWLS) approach for LDCT reconstruction that incorporates learned MCST2 priors. Experimental results show the superior performance of the proposed PWLS-MCST2 approach compared to other recent related schemes.
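The PWLS objective mentioned above has a standard form: a statistically weighted data-fit term plus a regularizer. A minimal sketch follows; the `regularizer` callable is a placeholder standing in for the learned MCST2 penalty, which is not spelled out in the abstract.

```python
import numpy as np

def pwls_cost(x, A, y, w, beta, regularizer):
    """Penalized weighted least squares objective:
    ||A x - y||_W^2 + beta * R(x), with diagonal statistical weights w.
    In PWLS-MCST2 the regularizer R would be the learned two-layer
    clustering-based sparsifying transform penalty; here it is an
    arbitrary callable for illustration."""
    residual = A @ x - y
    return float(residual @ (w * residual) + beta * regularizer(x))
```

Reconstruction then alternates between minimizing this cost over the image `x` and updating the sparse codes/cluster assignments inside the regularizer.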
Low-dose computed tomography (LDCT) scans effectively alleviate the radiation problem but degrade imaging quality. In this paper, we propose a novel LDCT reconstruction network that unrolls the iterative scheme and operates in both image and manifold spaces. Because patch manifolds of medical images have low-dimensional structure, we can build graphs from these manifolds. We then simultaneously use spatial convolution to extract local pixel-level features from the images and graph convolution to analyze nonlocal topological features in manifold space. Experiments show that our proposed method outperforms state-of-the-art methods in both quantitative and qualitative aspects. In addition, aided by a projection-loss component, our method also performs well in semi-supervised learning: with only 10% (40 slices) of the training data labeled, the network removes most noise while preserving details.
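The graph-on-patch-manifold idea can be illustrated with a toy sketch: connect each image patch to its nearest neighbors in feature space, then aggregate over those edges. This is a generic k-NN graph plus a mean-aggregation graph convolution, assumed for illustration; the paper's actual graph construction and graph-convolution variant may differ.

```python
import numpy as np

def knn_graph(patches, k):
    """Connect each flattened patch to its k nearest neighbors in
    Euclidean distance, exploiting the assumed low-dimensional patch
    manifold. Returns, per node, the indices of its k neighbors."""
    d = np.linalg.norm(patches[:, None, :] - patches[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]  # (num_patches, k) neighbor indices

def graph_conv(features, neighbors, weight):
    """Minimal graph convolution: each node averages its neighbors'
    features, then applies a shared linear map."""
    aggregated = features[neighbors].mean(axis=1)
    return aggregated @ weight
```

In the unrolled network, such a graph layer would sit alongside ordinary spatial convolutions, letting nonlocal but similar patches share information.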
Recently, the use of deep learning techniques to reconstruct computed tomography (CT) images has become a hot research topic, spanning sinogram-domain methods, image-domain methods, and sinogram-to-image-domain methods. All of these methods have achieved favorable results. In this article, we study the role of the fully connected layers used in the sinogram-to-image-domain approach. First, we present a simple domain-mapping neural network. Then, we analyze the role of its fully connected layers and visually inspect their weights. This visualization reveals that the main role of the fully connected layer is to implement the back-projection operation in CT reconstruction. This finding has important implications for deep-learning-based CT reconstruction: since fully connected layer weights consume huge amounts of memory, the back-projection step can instead be implemented with analytical algorithms embedded in the network, avoiding that resource occupation.
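Why an FC layer can learn back-projection: both are linear maps from sinogram to image, so back-projection is representable as a single dense matrix, which is exactly what a fully connected layer computes. A toy sketch under an assumed 2-pixel, 2-ray geometry (the matrix values are invented for illustration):

```python
import numpy as np

# toy forward model A: rows are rays, columns are image pixels
A = np.array([[1.0, 0.0],   # ray 1 intersects pixel 1 only
              [1.0, 1.0]])  # ray 2 intersects both pixels

# back-projection is A^T applied to the sinogram -- i.e. a fully
# connected layer whose weight matrix equals A^T
fc_weights = A.T

image = np.array([2.0, 3.0])
sinogram = A @ image                  # simulated measurements: [2, 5]
backprojected = fc_weights @ sinogram # "FC layer" acting on the sinogram
```

This also makes the abstract's memory point concrete: for an n-by-n image and m detector bins per view over v views, the dense weight matrix has n^2 * m * v entries, whereas an analytical back-projector needs no stored weights at all.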
Signal models based on sparse representations have received considerable attention in recent years. Meanwhile, deep models consisting of a cascade of functional layers, commonly known as deep neural networks, have been highly successful for object classification and have recently been introduced to image reconstruction. In this work, we develop a new image reconstruction approach based on a novel multi-layer model learned in an unsupervised manner, combining sparse representations and deep models. The proposed framework extends the classical sparsifying transform model for images to a Multi-lAyer Residual Sparsifying transform (MARS) model, wherein the transform-domain data are jointly sparsified over layers. We investigate the application of MARS models learned from limited regular-dose images for low-dose CT reconstruction using Penalized Weighted Least Squares (PWLS) optimization. We propose new formulations for multi-layer transform learning and image reconstruction, and derive an efficient block coordinate descent algorithm to learn the transforms across layers, in an unsupervised manner from limited regular-dose images. The learned model is then incorporated into the low-dose image reconstruction phase. Low-dose CT experimental results with both the XCAT phantom and Mayo Clinic data show that the MARS model outperforms conventional methods such as FBP and PWLS with the edge-preserving (EP) regularizer in terms of two numerical metrics (RMSE and SSIM) and noise suppression. Compared with the single-layer learned transform (ST) model, the MARS model better preserves subtle details.
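The residual-sparsification-over-layers idea can be sketched as follows: layer 1 sparsifies the transformed image, and each subsequent layer sparsifies the residual left over by the previous layer's thresholding. The transform matrices and thresholds below are placeholders; in MARS they are learned by the block coordinate descent algorithm from regular-dose images.

```python
import numpy as np

def hard_threshold(z, t):
    """Zero out entries with magnitude below t (hard thresholding)."""
    out = z.copy()
    out[np.abs(out) < t] = 0.0
    return out

def mars_sparsify(x, transforms, thresholds):
    """Illustrative multi-layer residual sparsification: each layer
    applies its transform W to the previous layer's residual, keeps a
    sparse code via thresholding, and passes the leftover residual on."""
    residual = x
    codes = []
    for W, t in zip(transforms, thresholds):
        mapped = W @ residual
        code = hard_threshold(mapped, t)
        codes.append(code)
        residual = mapped - code  # fed to the next layer
    return codes, residual
```

Deeper layers thus capture progressively finer detail that earlier, more aggressive thresholds discarded, which is consistent with the observation that MARS preserves subtle details better than a single-layer transform.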
The extensive use of medical CT has raised public concern over the radiation dose to the patient. Reducing the radiation dose increases CT image noise and artifacts, which can adversely affect not only the radiologists' judgment but also the performance of downstream medical image analysis tasks. Various low-dose CT denoising methods, especially the recent deep-learning-based approaches, have produced impressive results. However, existing denoising methods are all downstream-task-agnostic and neglect the diverse needs of downstream applications. In this paper, we introduce a novel Task-Oriented Denoising Network (TOD-Net) with a task-oriented loss that leverages knowledge from the downstream tasks. Comprehensive empirical analysis shows that the task-oriented loss complements other task-agnostic losses by steering the denoiser to enhance image quality in the task-related regions of interest. This enhancement in turn generally boosts the performance of various methods on the downstream task. The presented work may shed light on the future development of context-aware image denoising methods.
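A task-oriented loss of the kind described can be sketched as a task-agnostic fidelity term plus a term that compares a downstream network's responses on the denoised and reference images. This is one plausible reading of the abstract, not TOD-Net's exact loss; `task_net` and the weighting `lam` are illustrative assumptions.

```python
import numpy as np

def task_oriented_loss(denoised, reference, task_net, lam):
    """Hypothetical combined objective: pixel-wise fidelity plus a
    task-oriented term measuring how differently a frozen downstream
    model (e.g. a segmentation network) responds to the denoised image
    versus the reference. Large lam pushes the denoiser to preserve
    exactly the structures the downstream task relies on."""
    fidelity = np.mean((denoised - reference) ** 2)
    task_term = np.mean((task_net(denoised) - task_net(reference)) ** 2)
    return fidelity + lam * task_term
```

During training only the denoiser's weights would be updated; the downstream network stays fixed and serves purely as a source of task-relevant gradients.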