We study the problem of spectrum estimation from transmission data of a known phantom. The goal is to reconstruct an x-ray spectrum that accurately models the measured x-ray transmission curves and reflects the realistic shape of the typical energy spectra of the CT system. To this end, spectrum estimation is posed as an optimization problem with the x-ray spectrum as the unknown variable, and a Kullback-Leibler (KL) divergence constraint is employed to incorporate prior knowledge of the spectrum and enhance the numerical stability of the estimation process. The formulated constrained optimization problem is convex and can be solved efficiently by use of the exponentiated-gradient (EG) algorithm. We demonstrate the effectiveness of the proposed approach on simulated and experimental data, and compare it with the expectation-maximization (EM) method. In simulations, the proposed algorithm yields x-ray spectra that closely match the ground truth and accurately represent the attenuation of x-ray photons in materials, both those included in and those excluded from the estimation process. In experiments, the calculated transmission curve agrees well with the measured transmission curve, and the estimated spectra exhibit physically realistic shapes. The results further show comparable performance between the proposed optimization-based approach and EM. In conclusion, our constrained-optimization formulation provides an interpretable and flexible framework for spectrum estimation. Moreover, the KL-divergence constraint can incorporate a prior spectrum and appears to capture important features of the x-ray spectrum, allowing accurate and robust spectrum estimation in CT imaging.
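As a rough illustration of the kind of update described above, the sketch below fits a normalized spectrum to transmission data with multiplicative exponentiated-gradient steps, using a KL penalty toward a prior spectrum as a stand-in for the KL-divergence constraint. This is a minimal sketch under assumed notation (the system matrix `A`, measurements `t`, prior `s_prior`, and parameters `eta`, `lam` are placeholders), not the authors' implementation.

```python
import numpy as np

def estimate_spectrum(A, t, s_prior, eta=0.5, lam=0.1, n_iter=2000):
    """Illustrative EG-based spectrum estimation (assumed, simplified setup).

    A        : (M, N) forward model; A[m, i] is the transmission of energy
               bin i along measurement path m, e.g. exp(-sum_j mu_j(E_i)*L[m, j])
               for known phantom path lengths L and attenuation coefficients mu.
    t        : (M,) measured transmission values.
    s_prior  : (N,) normalized prior spectrum used in the KL term.
    eta, lam : step size and KL-penalty weight.
    """
    s = s_prior.copy()                        # start from the prior spectrum
    for _ in range(n_iter):
        residual = A @ s - t                  # data-fit residual
        grad = A.T @ residual                 # gradient of 0.5*||A s - t||^2
        grad += lam * np.log(s / s_prior)     # gradient of lam*KL(s || s_prior)
                                              # (constant term cancels on renormalization)
        s = s * np.exp(-eta * grad)           # multiplicative (EG) update
        s /= s.sum()                          # renormalize onto the simplex
    return s
```

One reason the EG-style update is a natural fit here is that the multiplicative step keeps every spectral bin nonnegative and the renormalization keeps the spectrum on the probability simplex, so the physical constraints on an x-ray spectrum are maintained without explicit projection.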