Data-driven, model-independent reconstructions of the dark energy equation of state $w(z)$ are presented using Planck 2015 era CMB, BAO, SNIa and Lyman-$\alpha$ data. These reconstructions identify the $w(z)$ behaviour supported by the data and show a bifurcation of the equation-of-state posterior in the range $1.5{<}z{<}3$. Although the concordance $\Lambda$CDM model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other a supernegative equation of state (also known as `phantom dark energy') is identified within the $1.5\sigma$ confidence intervals of the posterior distribution. To identify the power of different datasets in constraining the dark energy equation of state, we use a novel formulation of the Kullback--Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible dataset combination. The SNIa and BAO datasets are shown to provide considerably more constraining power than the Lyman-$\alpha$ datasets. Further, SNIa and BAO constrain most strongly in the redshift range $0.1$--$0.5$, whilst the Lyman-$\alpha$ data constrain more weakly over a broader range. We do not attribute the preference for a supernegative equation of state to any particular dataset, and note that the $\Lambda$CDM model was favoured by more than $2$ log-units in the Bayes factors over all the models tested, despite the weakly preferred $w(z)$ structure in the data.
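For context, a minimal sketch of the underlying quantity, in illustrative notation not taken from the paper ($\theta$ for the reconstruction parameters, $D$ for a dataset combination, $\pi$ for the prior and $\mathcal{P}$ for the posterior): the standard Kullback--Leibler divergence from prior to posterior is
$$
\mathcal{D}_{\mathrm{KL}}\big(\mathcal{P}\,\|\,\pi\big)
  = \int \mathcal{P}(\theta \mid D)\,
    \ln\frac{\mathcal{P}(\theta \mid D)}{\pi(\theta)}\,\mathrm{d}\theta ,
$$
so that larger values correspond to dataset combinations that add more information when moving from prior to posterior. The `novel formulation' referred to above is presumably built on this quantity; its exact form is not given in the abstract.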