Unsupervised Sparse-view Backprojection via Convolutional and Spatial Transformer Networks


Abstract

Many imaging technologies rely on tomographic reconstruction, which requires solving a multidimensional inverse problem given a finite number of projections. Backprojection is a popular class of algorithms for tomographic reconstruction; however, it typically yields poor reconstructions when the projection angles are sparse and/or the sensor characteristics are not uniform. Several deep-learning-based algorithms have been developed to solve this inverse problem and reconstruct the image from a limited number of projections. However, these algorithms typically require examples of the ground truth (i.e., examples of reconstructed images) to achieve good performance. In this paper, we introduce an unsupervised sparse-view backprojection algorithm that does not require ground truth. The algorithm consists of two modules in a generator-projector framework: a convolutional neural network and a spatial transformer network. We evaluated our algorithm using computed tomography (CT) images of the human chest. We show that our algorithm significantly outperforms filtered backprojection when the projection angles are very sparse, as well as when the sensor characteristics vary across angles. Our approach has practical applications for medical imaging and other imaging modalities (e.g., radar) where sparse and/or non-uniform projections may be acquired due to time or sampling constraints.
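The abstract only describes the method at a high level, so the following is a minimal, illustrative sketch of the generator-projector idea rather than the authors' implementation. It assumes a 2-D parallel-beam geometry, a small PyTorch CNN as the generator, and a spatial-transformer-style projector built from `affine_grid`/`grid_sample`; all names (`Generator`, `project`, the phantom, the angle set, the training hyperparameters) are hypothetical.

```python
# Illustrative sketch only (assumed setup, not the paper's released code):
# a CNN "generator" produces an image, a spatial-transformer "projector"
# re-projects it at the measured angles, and the loss compares those
# re-projections to the measured sparse projections.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class Generator(nn.Module):
    """Small CNN that maps a crude initial image to a refined reconstruction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def project(image, angles_rad):
    """Spatial-transformer projector: rotate the image to each view angle,
    then integrate (sum) along the detector axis to form 1-D projections."""
    b, _, h, w = image.shape
    views = []
    for theta in angles_rad:
        cos_t, sin_t = math.cos(theta), math.sin(theta)
        # 2x3 affine matrix for rotation about the image centre
        mat = torch.tensor([[cos_t, -sin_t, 0.0],
                            [sin_t,  cos_t, 0.0]],
                           dtype=image.dtype, device=image.device)
        grid = F.affine_grid(mat.unsqueeze(0).repeat(b, 1, 1),
                             image.shape, align_corners=False)
        rotated = F.grid_sample(image, grid, align_corners=False)
        views.append(rotated.sum(dim=2))          # integrate along rows -> (b, 1, w)
    return torch.stack(views, dim=1)              # (b, n_views, 1, w)


# Hypothetical data: a few measured 1-D projections at sparse angles.
angles = [i * math.pi / 4 for i in range(4)]      # 4 views spanning 180 degrees
phantom = torch.zeros(1, 1, 64, 64)
phantom[:, :, 20:44, 24:40] = 1.0                 # stand-in object (unknown in practice)
measured = project(phantom, angles)               # what the scanner would record

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
# Crude initial image: the mean measured view smeared along the rows.
seed = measured.mean(dim=1).unsqueeze(2).expand(-1, -1, 64, -1).clone()

for step in range(200):
    recon = gen(seed)
    # Unsupervised loss: consistency with the measured projections only.
    loss = F.mse_loss(project(recon, angles), measured)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key point the sketch tries to capture is that the training signal comes solely from comparing re-projections of the generated image against the measured projections, so no ground-truth reconstructed images are needed.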
