
A Convex Functional for Image Denoising based on Patches with Constrained Overlaps and its vectorial application to Low Dose Differential Phase Tomography

Posted by Alessandro Mirone
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





We solve the image denoising problem with a dictionary learning technique by writing a convex functional of a new form. Besides the usual sparsity-inducing term and fidelity term, this functional contains a new term that enforces similarity between overlapping patches in the overlap regions. The functional depends on two free regularization parameters: a coefficient multiplying the sparsity-inducing $L_{1}$ norm of the patch basis-function coefficients, and a coefficient multiplying the $L_{2}$ norm of the differences between patches in the overlapping regions. The solution is found by applying iterative proximal gradient descent with FISTA acceleration. In the case of tomographic reconstruction, we compute the gradient by projecting the solution and backprojecting its error at each iterative step. We study the quality of the solution, as a function of the regularization parameters and the noise, on synthetic data for which the solution is known a priori. We then apply the method to experimental data from Differential Phase Tomography. For this case we use an original approach that consists in using vectorial patches, each patch having two components, one per gradient component. The resulting algorithm, implemented in the ESRF tomography reconstruction code PyHST, proves to be robust, efficient, and well suited to strongly reducing the required dose and the number of projections in medical tomography.
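A minimal sketch of the optimization loop the abstract describes: proximal gradient descent with FISTA acceleration, where the $L_{1}$ term is handled by a soft-thresholding proximal step. The callable `grad_smooth` (which would bundle the gradients of the fidelity and patch-overlap terms, and, for tomography, the projection/error-backprojection pair) and the Lipschitz constant are placeholders for illustration; this is not the PyHST implementation.

```python
import numpy as np

def soft_threshold(x, t):
    # proximal operator of the L1 norm: shrinks coefficients toward zero
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(grad_smooth, l1_weight, x0, lipschitz, n_iter=200):
    """FISTA-accelerated proximal gradient descent (generic sketch).

    grad_smooth(x): gradient of the smooth part of the functional
                    (fidelity term + L2 overlap-consistency term).
    l1_weight:      coefficient of the sparsity-inducing L1 term.
    lipschitz:      Lipschitz constant of grad_smooth (sets the step size).
    """
    x = x0.copy()          # current iterate
    y = x0.copy()          # extrapolated point
    t = 1.0
    step = 1.0 / lipschitz
    for _ in range(n_iter):
        # gradient step on the smooth part, then L1 proximal step
        x_new = soft_threshold(y - step * grad_smooth(y), step * l1_weight)
        # FISTA momentum update
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x
```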



Read also

The extensive use of medical CT has raised public concern over the radiation dose to the patient. Reducing the radiation dose leads to increased CT image noise and artifacts, which can adversely affect not only the radiologists' judgement but also the performance of downstream medical image analysis tasks. Various low-dose CT denoising methods, especially the recent deep learning based approaches, have produced impressive results. However, the existing denoising methods are all downstream-task-agnostic and neglect the diverse needs of the downstream applications. In this paper, we introduce a novel Task-Oriented Denoising Network (TOD-Net) with a task-oriented loss leveraging knowledge from the downstream tasks. Comprehensive empirical analysis shows that the task-oriented loss complements other task-agnostic losses by steering the denoiser to enhance image quality in the task-related regions of interest. Such enhancement in turn brings general boosts to the performance of various methods for the downstream task. The presented work may shed light on the future development of context-aware image denoising methods.
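As an illustration of how a task-oriented term can complement a task-agnostic one, here is a hedged sketch (not the actual TOD-Net loss): a standard MSE reconstruction term plus a term that compares a frozen downstream network's output on the denoised image with its output on the normal-dose image. The names `denoiser`, `task_net`, and `task_weight` are hypothetical.

```python
import torch
import torch.nn.functional as F

def task_oriented_loss(denoiser, task_net, low_dose, normal_dose, task_weight=0.1):
    # Hypothetical sketch, not TOD-Net's loss: task-agnostic MSE term
    # plus a term driven by a (frozen) downstream task network.
    denoised = denoiser(low_dose)
    recon_loss = F.mse_loss(denoised, normal_dose)
    with torch.no_grad():
        task_target = task_net(normal_dose)   # downstream prediction on the clean image
    task_pred = task_net(denoised)            # downstream prediction on the denoised image
    task_loss = F.mse_loss(task_pred, task_target)
    return recon_loss + task_weight * task_loss
```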
Synchrotron-based X-ray computed tomography is widely used for investigating the inner structures of specimens at high spatial resolution. However, potential beam damage to samples often limits the X-ray exposure during tomography experiments. Proposed strategies for eliminating beam damage also decrease reconstruction quality. Here we present a deep learning-based method to enhance low-dose tomography reconstruction via a hybrid-dose acquisition strategy composed of extremely sparse-view normal-dose projections and full-view low-dose projections. Corresponding image pairs are extracted from the low-/normal-dose projections to train a deep convolutional neural network, which is then applied to enhance the full-view noisy low-dose projections. Evaluation on two experimental datasets under different hybrid-dose acquisition conditions shows significantly improved structural details and reduced noise levels compared to uniformly distributed acquisitions with the same total dose. The resulting reconstructions also preserve more structural information than reconstructions processed with traditional analytical and regularization-based iterative reconstruction methods from uniform acquisitions. Our performance comparisons show that our implementation, HDrec, can denoise real-world experimental data 410x faster than the state-of-the-art Xlearn method while providing better quality. This framework can be applied to other tomographic or scanning-based X-ray imaging techniques for enhanced analysis of dose-sensitive samples and has great potential for studying fast dynamic processes.
We propose a set of iterative regularization algorithms for the TV-Stokes model to restore images corrupted by Gaussian noise. These extend the iterative regularization algorithm proposed for the classical Rudin-Osher-Fatemi (ROF) model for image reconstruction, a single-step model involving a scalar field smoothing, to the TV-Stokes model for image reconstruction, a two-step model involving a vector field smoothing in the first step and a scalar field smoothing in the second. The iterative regularization algorithms proposed here are Richardson-iteration-like. Our experimental results show improvement over the original method in the quality of the restored image. Convergence analysis and numerical experiments are presented.
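For context, the classical iterative-regularization outer loop for an ROF-type denoiser can be sketched as follows; the same idea extends to a two-step TV-Stokes solver. Here `rof_solve` stands for any TV-type denoising routine and is a placeholder, not the paper's implementation.

```python
import numpy as np

def iterative_regularization(f, rof_solve, n_outer=5):
    """Richardson-like iterative regularization (generic sketch):
    repeatedly add the residual back to the data and re-denoise,
    recovering texture that a single denoising pass removes.

    rof_solve(g): any ROF/TV-Stokes-type denoiser applied to image g.
    """
    v = np.zeros_like(f)          # accumulated residual
    u = np.zeros_like(f)
    for _ in range(n_outer):
        u = rof_solve(f + v)      # denoise data augmented with the residual
        v = v + f - u             # update residual with what was removed
    return u
```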
LDCT has drawn major attention in the medical imaging field due to the potential health risks of CT-associated X-ray radiation to patients. Reducing the radiation dose, however, decreases the quality of the reconstructed images, which consequently compromises diagnostic performance. Various deep learning techniques have been introduced to improve the image quality of LDCT images through denoising. GAN-based denoising methods usually leverage an additional classification network, i.e. a discriminator, to learn the most discriminative differences between the denoised and normal-dose images and hence regularize the denoising model accordingly; it often focuses either on the global structure or on local details. To better regularize the LDCT denoising model, this paper proposes a novel method, termed DU-GAN, which leverages U-Net based discriminators in the GAN framework to learn both global and local differences between the denoised and normal-dose images in both the image and gradient domains. The merit of such a U-Net based discriminator is that it can not only provide per-pixel feedback to the denoising network through the outputs of the U-Net but also focus on the global structure at a semantic level through the middle layer of the U-Net. In addition to the adversarial training in the image domain, we also apply another U-Net based discriminator in the image gradient domain to alleviate the artifacts caused by photon starvation and enhance the edges of the denoised CT images. Furthermore, the CutMix technique enables the per-pixel outputs of the U-Net based discriminator to provide radiologists with a confidence map to visualize the uncertainty of the denoised results, facilitating LDCT-based screening and diagnosis. Extensive experiments on simulated and real-world datasets demonstrate superior performance over recently published methods both qualitatively and quantitatively.
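A hedged sketch of the dual-domain adversarial idea (not the DU-GAN code): the generator is penalized by two per-pixel discriminators, one seeing the denoised image and one seeing a finite-difference gradient map of it. `disc_img` and `disc_grad` are hypothetical U-Net-style discriminators returning per-pixel logits.

```python
import torch
import torch.nn.functional as F

def gradient_magnitude(x):
    # simple finite-difference gradient magnitude standing in for the "gradient domain"
    dh = F.pad(x[:, :, 1:, :] - x[:, :, :-1, :], (0, 0, 0, 1))
    dw = F.pad(x[:, :, :, 1:] - x[:, :, :, :-1], (0, 1, 0, 0))
    return torch.sqrt(dh ** 2 + dw ** 2 + 1e-12)

def generator_adversarial_loss(disc_img, disc_grad, denoised):
    # Hypothetical sketch: per-pixel (U-Net style) discriminators in the image
    # and gradient domains; the generator tries to make both maps look "real".
    logits_img = disc_img(denoised)
    logits_grad = disc_grad(gradient_magnitude(denoised))
    loss_img = F.binary_cross_entropy_with_logits(logits_img, torch.ones_like(logits_img))
    loss_grad = F.binary_cross_entropy_with_logits(logits_grad, torch.ones_like(logits_grad))
    return loss_img + loss_grad
```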
Bin Wu, Xue-Cheng Tai, 2020
The paper presents a fully coupled TV-Stokes model and proposes an algorithm based on alternating minimization of the objective functional, whose first iteration is exactly the modified TV-Stokes model proposed earlier. The model is a generalization of the second-order Total Generalized Variation model. A convergence analysis is given.