
PyHST2: a hybrid distributed code for high-speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities

Published by: Alessandro Mirone
Publication date: 2013
Research field: Informatics Engineering
Paper language: English

We present the PyHST2 code, which is in service at the ESRF for phase-contrast and absorption tomography. The code has been engineered to sustain the high data flow typical of third-generation synchrotron facilities (10 terabytes per experiment) by adopting a distributed and pipelined architecture. Besides a default filtered backprojection reconstruction, the code implements iterative reconstruction techniques with a-priori knowledge. The latter are used to improve reconstruction quality or to reduce the required data volume while still reaching a given quality goal. The implemented a-priori knowledge techniques are based on total variation penalisation and on a recently introduced convex functional based on overlapping patches. We give details of the different methods and their implementations; the code is distributed under a free license. We also provide methods for estimating, in the absence of ground-truth data, the optimal parameter values for the a-priori techniques.
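
To make the flavour of these a-priori techniques concrete, below is a minimal NumPy sketch of total-variation-penalised iterative reconstruction by plain gradient descent. It is not PyHST2's actual distributed GPU implementation: the projector `A`, backprojector `A_T`, step size, and TV weight are all placeholder assumptions.

```python
import numpy as np

def tv_grad(x, eps=1e-8):
    """Gradient of a smoothed total variation of a 2D image."""
    dx = np.diff(x, axis=1, append=x[:, -1:])
    dy = np.diff(x, axis=0, append=x[-1:, :])
    norm = np.sqrt(dx**2 + dy**2 + eps)
    px, py = dx / norm, dy / norm
    # grad TV = -div(grad x / |grad x|), via backward differences.
    div = (np.diff(px, axis=1, prepend=px[:, :1])
           + np.diff(py, axis=0, prepend=py[:1, :]))
    return -div

def tv_reconstruct(A, A_T, sino, shape, lam=0.1, step=1e-3, n_iter=200):
    """Gradient descent on 0.5*||A x - sino||^2 + lam * TV(x).

    A / A_T are placeholder projection and backprojection operators
    (e.g. wrappers around a Radon transform); they stand in for the
    optimised projectors a production code such as PyHST2 would use.
    """
    x = np.zeros(shape)
    for _ in range(n_iter):
        data_grad = A_T(A(x) - sino)            # gradient of the data term
        x -= step * (data_grad + lam * tv_grad(x))
    return x
```

In a production code the same objective would be minimised with a proper convex solver and tuned projectors; the sketch only illustrates how a TV penalty enters the objective alongside the data-fidelity term.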


Read also

Tomographic image reconstruction with deep learning is an emerging field, but a recent landmark study reveals that several deep reconstruction networks are unstable for computed tomography (CT) and magnetic resonance imaging (MRI). Specifically, three kinds of instabilities were reported: (1) strong image artefacts from tiny perturbations, (2) small features missing in a deeply reconstructed image, and (3) decreased imaging performance with increased input data. On the other hand, compressed sensing (CS) inspired reconstruction methods do not suffer from these instabilities because of their built-in kernel awareness. For deep reconstruction to realize its full potential and become a mainstream approach for tomographic imaging, it is thus critically important to meet this challenge by stabilizing deep reconstruction networks. Here we propose an Analytic Compressed Iterative Deep (ACID) framework to address this challenge. ACID synergizes a deep reconstruction network trained on big data, kernel awareness from CS-inspired processing, and iterative refinement to minimize the data residual relative to the real measurement. Our study demonstrates that deep reconstruction using ACID is accurate and stable, and sheds light on the converging mechanism of the ACID iteration under a Bounded Relative Error Norm (BREN) condition. In particular, the study shows that ACID-based reconstruction is resilient against adversarial attacks, superior to classic sparsity-regularized reconstruction alone, and eliminates the three kinds of instabilities. We anticipate that this integrative data-driven approach will help promote the development and translation of deep tomographic image reconstruction networks into clinical applications.
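
As a rough illustration of the kind of loop this abstract describes (not the authors' released code), the sketch below alternates the three ingredients ACID combines: a trained network, a CS-style sparsifying step, and a descent step that shrinks the data residual against the real measurement. `net`, `sparsify`, `A`, `A_T`, and `mu` are all hypothetical placeholders.

```python
import numpy as np

def acid_like_loop(A, A_T, net, sparsify, y, mu=0.5, n_iter=20):
    """Schematic of an ACID-style iteration (illustrative only).

    Alternates the three ingredients the abstract lists: a trained
    network `net` (deep prior), a compressed-sensing style step
    `sparsify` (kernel awareness), and a correction that reduces the
    residual A(x) - y against the real measurement y.
    """
    x = A_T(y)                       # crude initial backprojection
    for _ in range(n_iter):
        x = net(x)                   # deep prior: network refinement
        x = sparsify(x)              # kernel awareness: CS-style step
        x = x - mu * A_T(A(x) - y)   # data fidelity: residual descent
    return x
```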
Low-dose tomography is highly preferred in medical procedures for its reduced radiation risk compared to standard-dose computed tomography (CT). However, the lower the intensity of the X-rays, the higher the acquisition noise, and hence the reconstructions suffer from artefacts. A large body of work has focused on improving the algorithms to minimize these artefacts. In this work, we propose two new techniques, rescaled non-linear least squares and Poisson-Gaussian convolution, that reconstruct the underlying image making use of an accurate or near-accurate statistical model of the noise in the projections. We also propose a reconstruction method for when prior knowledge of the underlying object is available in the form of templates. This is applicable to longitudinal studies wherein the same object is scanned multiple times to observe the changes that evolve in it over time. Our results on 3D data show that prior information can be used to compensate for the low-dose artefacts, and we demonstrate that it is possible to simultaneously prevent the prior from adversely biasing the reconstructions of new changes in the test object, via a method called "re-irradiation". Additionally, we present two techniques for automated tuning of the regularization parameters for tomographic inversion.
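
The paper's exact rescaled non-linear least squares and Poisson-Gaussian convolution objectives are more involved; as an illustrative stand-in, the sketch below uses the common approximation of Poisson-Gaussian noise as Gaussian with signal-dependent variance, weighting each projection bin by its inverse approximate variance. The operators `A`, `A_T` and the parameters `gain` and `sigma2` are placeholder assumptions.

```python
import numpy as np

def pg_weighted_lsq(A, A_T, y, x0, gain=1.0, sigma2=1.0,
                    step=1e-3, n_iter=100):
    """Gradient descent on a signal-dependent weighted least squares
    data term: each projection bin is weighted by
    1 / (gain * y + sigma2), the inverse of its approximate variance
    under a Poisson-Gaussian noise model."""
    w = 1.0 / (gain * np.maximum(y, 0.0) + sigma2)   # per-bin weights
    x = x0.copy()
    for _ in range(n_iter):
        x -= step * A_T(w * (A(x) - y))              # weighted residual descent
    return x
```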
The need for tomographic reconstruction from sparse measurements arises when the measurement process is potentially harmful, needs to be rapid, or is uneconomical. In such cases, information from previous longitudinal scans of the same object helps to reconstruct the current object while requiring significantly fewer updating measurements. Our work is based on longitudinal data acquisition scenarios where we wish to study new changes that evolve within an object over time, such as in repeated scanning for disease monitoring, or in tomography-guided surgical procedures. While this is easily feasible when measurements are acquired from a large number of projection views, it is challenging when the number of views is limited. If the goal is to track the changes while simultaneously reducing sub-sampling artefacts, we propose (1) acquiring measurements from a small number of views and using a global unweighted prior-based reconstruction. If the goal is to observe details of the new changes, we propose (2) acquiring measurements from a moderate number of views and using a more involved reconstruction routine. We show that in the latter case, a weighted technique is necessary in order to prevent the prior from adversely affecting the reconstruction of new structures that are absent in any of the earlier scans. The reconstruction of new regions is safeguarded from the bias of the prior by computing regional weights that moderate the local influence of the priors. We are thus able to effectively reconstruct both the old and the new structures in the test object. In addition to testing on simulated data, we have validated the efficacy of our method on real tomographic data. The results demonstrate the use of both unweighted and weighted priors in different scenarios.
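
A minimal sketch of the regional-weighting idea, under our own assumption (not necessarily the paper's exact rule) that the prior is downweighted wherever a coarse current estimate locally disagrees with the registered prior scan; the window size `win` and scale `tau` are hypothetical parameters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def regional_prior_weights(x_est, prior, win=9, tau=0.05):
    """Illustrative regional weighting: measure local disagreement
    between a coarse current estimate and the registered prior scan,
    returning weights near 1 where they agree (prior trusted) and
    near 0 where new structure appears (prior suppressed)."""
    local_err = uniform_filter((x_est - prior) ** 2, size=win)
    return np.exp(-local_err / tau)   # values in (0, 1]
```

The returned weights would then scale the prior term locally, e.g. a penalty of the form lam * w * (x - prior)**2, so new structures are reconstructed mainly from the data while unchanged regions still benefit from the prior.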
Properties of Superiorized Preconditioned Conjugate Gradient (SupPCG) algorithms in image reconstruction from projections are examined. Least squares (LS) is usually chosen for measuring data-inconsistency in these inverse problems. Preconditioned conjugate gradient algorithms are fast methods for finding an LS solution. However, for ill-posed problems such as image reconstruction, an LS solution may not provide good image quality. This can be taken care of by superiorization. A superiorized algorithm leads to images with an improved value of a secondary criterion (a merit function such as the total variation) compared to images with similar data-inconsistency obtained by the algorithm without superiorization. Numerical experimentation shows that SupPCG can lead to high-quality reconstructions within a remarkably short time. A theoretical analysis is also provided.
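
A schematic superiorization loop, simplified from SupPCG: the basic data-consistency step is an abstract callable `basic_step` standing in for a preconditioned CG iteration, the merit function `tv` and its gradient `tv_grad` are passed in, and the perturbation step sizes shrink geometrically so their sum is finite. All names and parameters here are placeholders.

```python
import numpy as np

def superiorized_iteration(basic_step, tv, tv_grad, x, beta=1.0,
                           kappa=0.5, n_iter=50):
    """Schematic superiorization: before each basic algorithmic step,
    perturb the iterate along a nonascending direction of the merit
    function (e.g. total variation), with summable step sizes so the
    perturbations do not destroy the basic algorithm's behaviour."""
    for k in range(n_iter):
        g = tv_grad(x)
        d = -g / (np.linalg.norm(g) + 1e-12)   # nonascending TV direction
        step = beta * kappa**k                 # geometric, summable steps
        if tv(x + step * d) <= tv(x):          # accept only if TV no worse
            x = x + step * d
        x = basic_step(x)                      # e.g. one (P)CG iteration
    return x
```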
Faraday tomography offers crucial information on magnetized astronomical objects, such as quasars, galaxies, or galaxy clusters, by observing their magnetoionic media. The observed linear polarization spectrum is inverse Fourier transformed to obtain the Faraday dispersion function (FDF), providing a tomographic distribution of the magnetoionic media along the line of sight. However, this transform gives a poor reconstruction of the FDF because of the instrument's limited wavelength coverage. The inability of current Faraday tomography techniques to reliably solve this inverse problem has noticeably plagued cosmic magnetism studies. We propose a new algorithm inspired by the well-studied area of signal restoration, called the Constraining and Restoring iterative Algorithm for Faraday Tomography (CRAFT). This iterative, model-independent algorithm is computationally inexpensive and requires only weak, physically motivated assumptions to produce high-fidelity FDF reconstructions. We demonstrate an application to a realistic synthetic model FDF of the Milky Way, where CRAFT shows greater potential than other popular model-independent techniques. The dependence of the various techniques' reconstruction performance on observational frequency coverage is also demonstrated for a simpler FDF. CRAFT exhibits improvements even over model-dependent techniques (i.e., QU-fitting) by capturing complex multi-scale features of the FDF amplitude and polarization angle variations within a source. The proposed approach will be of utmost importance for future cosmic magnetism studies, especially with broadband polarization data from the Square Kilometre Array and its precursors. We make the CRAFT code publicly available.
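
Since CRAFT itself is publicly released, the sketch below is only a schematic of the general constrain-and-restore idea in the style of Gerchberg-Papoulis iteration, not the released code: alternate between enforcing the measured channels in the lambda-squared domain and applying a weak constraint on the FDF in Faraday depth. The FFT orientation, the uniform lambda-squared grid, and the optional `support` constraint are illustrative assumptions.

```python
import numpy as np

def craft_like(P_obs, mask, n_iter=100, support=None):
    """Constrain-and-restore sketch for Faraday tomography.

    P_obs : complex polarization spectrum on a uniform lambda^2 grid.
    mask  : boolean array marking the observed channels.
    Alternates (1) restoring the measured channels in the lambda^2
    domain and (2) applying a weak constraint on the Faraday
    dispersion function, here an optional support mask in depth.
    """
    P = P_obs * mask                     # zero-fill unobserved channels
    for _ in range(n_iter):
        F = np.fft.ifft(P)               # lambda^2 -> Faraday depth
        if support is not None:
            F = F * support              # weak a-priori constraint on FDF
        P = np.fft.fft(F)                # back to lambda^2 domain
        P[mask] = P_obs[mask]            # restore the measured data
    return np.fft.ifft(P)                # final FDF estimate
```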