
Constraining the dark energy equation of state using Bayes theorem and the Kullback-Leibler divergence

Posted by Sonke Hee
Publication date: 2016
Research field: Physics
Paper language: English





Data-driven model-independent reconstructions of the dark energy equation of state $w(z)$ are presented using Planck 2015 era CMB, BAO, SNIa and Lyman-$\alpha$ data. These reconstructions identify the $w(z)$ behaviour supported by the data and show a bifurcation of the equation of state posterior in the range $1.5{<}z{<}3$. Although the concordance $\Lambda$CDM model is consistent with the data at all redshifts in one of the bifurcated spaces, in the other a supernegative equation of state (also known as `phantom dark energy') is identified within the $1.5\sigma$ confidence intervals of the posterior distribution. To identify the power of different datasets in constraining the dark energy equation of state, we use a novel formulation of the Kullback--Leibler divergence. This formalism quantifies the information the data add when moving from priors to posteriors for each possible dataset combination. The SNIa and BAO datasets are shown to provide much more constraining power in comparison to the Lyman-$\alpha$ datasets. Further, SNIa and BAO constrain most strongly around the redshift range $0.1$--$0.5$, whilst the Lyman-$\alpha$ data constrain weakly over a broader range. We do not attribute the supernegative favouring to any particular dataset, and note that the $\Lambda$CDM model was favoured at more than $2$ log-units in Bayes factors over all the models tested despite the weakly preferred $w(z)$ structure in the data.


Read also

Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon's entropy, and comes up in many settings. It was introduced by Rényi as a measure of information that satisfies almost the same axioms as Kullback-Leibler divergence, and depends on a parameter that is called its order. In particular, the Rényi divergence of order 1 equals the Kullback-Leibler divergence. We review and extend the most important properties of Rényi divergence and Kullback-Leibler divergence, including convexity, continuity, limits of $\sigma$-algebras and the relation of the special order 0 to the Gaussian dichotomy and contiguity. We also show how to generalize the Pythagorean inequality to orders different from 1, and we extend the known equivalence between channel capacity and minimax redundancy to continuous channel inputs (for all orders) and present several other minimax results.
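The order-1 limit mentioned in this abstract is easy to check numerically; a sketch with two arbitrary discrete distributions:

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Renyi divergence of order alpha (alpha > 0, alpha != 1), in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions, in nats."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.2, 0.5, 0.3])

# As alpha -> 1, the Renyi divergence approaches the KL divergence.
assert abs(renyi_divergence(p, q, 1.0001) - kl_divergence(p, q)) < 1e-3
# The Renyi divergence is non-decreasing in its order.
assert renyi_divergence(p, q, 0.5) <= renyi_divergence(p, q, 2.0)
```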
We propose a method to fuse posterior distributions learned from heterogeneous datasets. Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors and proceeds using a simple assign-and-average approach. The components of the dataset posteriors are assigned to the proposed global model components by solving a regularized variant of the assignment problem. The global components are then updated based on these assignments by their mean under a KL divergence. For exponential family variational distributions, our formulation leads to an efficient non-parametric algorithm for computing the fused model. Our algorithm is easy to describe and implement, efficient, and competitive with state-of-the-art on motion capture analysis, topic modeling, and federated learning of Bayesian neural networks.
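The assign-and-average step can be sketched for one-dimensional Gaussian components. The plain Hungarian assignment and parameter average below are simplifications of the regularized assignment and exponential-family KL mean described in the abstract, and all numbers are invented:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def kl_gauss_1d(m1, v1, m2, v2):
    """KL(N(m1, v1) || N(m2, v2)) for 1-D Gaussians (v = variance)."""
    return 0.5 * (np.log(v2 / v1) + (v1 + (m1 - m2)**2) / v2 - 1.0)

# Two dataset posteriors, each a mean-field set of 1-D Gaussian components
# given as (means, variances); the numbers are purely illustrative.
local_posteriors = [
    (np.array([0.0, 5.0]), np.array([1.0, 1.0])),
    (np.array([5.2, 0.1]), np.array([1.5, 0.8])),
]
global_m = np.array([0.0, 5.0])   # initial global component means
global_v = np.array([1.0, 1.0])   # initial global component variances

# Assign each local component to a global one by minimising total KL cost,
# then refresh each global component as the average of its assignees.
assigned_m = [[] for _ in global_m]
assigned_v = [[] for _ in global_m]
for means, variances in local_posteriors:
    cost = np.array([[kl_gauss_1d(m, v, gm, gv)
                      for gm, gv in zip(global_m, global_v)]
                     for m, v in zip(means, variances)])
    rows, cols = linear_sum_assignment(cost)
    for r, c in zip(rows, cols):
        assigned_m[c].append(means[r])
        assigned_v[c].append(variances[r])

global_m = np.array([np.mean(ms) for ms in assigned_m])
global_v = np.array([np.mean(vs) for vs in assigned_v])
```

Here the second dataset's component at 5.2 is matched to the global component near 5.0 rather than the one near 0, so averaging never mixes unrelated components.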
Variational Inference (VI) is a popular alternative to asymptotically exact sampling in Bayesian inference. Its main workhorse is optimization over a reverse Kullback-Leibler divergence (RKL), which typically underestimates the tail of the posterior, leading to miscalibration and potential degeneracy. Importance sampling (IS), on the other hand, is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures. The quality of IS crucially depends on the choice of the proposal distribution. Ideally, the proposal distribution has heavier tails than the target, which is rarely achievable by minimizing the RKL. We thus propose a novel combination of optimization and sampling techniques for approximate Bayesian inference by constructing an IS proposal distribution through the minimization of a forward KL (FKL) divergence. This approach guarantees asymptotic consistency and a fast convergence towards both the optimal IS estimator and the optimal variational approximation. We empirically demonstrate on real data that our method is competitive with variational boosting and MCMC.
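The mass-covering behaviour of the forward KL can be seen in a toy sketch: over a Gaussian family, minimising KL(target || q) is exactly moment matching, which keeps importance weights bounded where a mode-seeking reverse-KL fit does not. The bimodal target below is illustrative and not from the paper:

```python
import numpy as np
from scipy.stats import norm

def target_pdf(x):
    """Toy bimodal 'posterior': equal mixture of N(-3, 1) and N(3, 1)."""
    return 0.5 * norm.pdf(x, -3, 1) + 0.5 * norm.pdf(x, 3, 1)

# Minimising the forward KL(target || q) over Gaussians is moment matching:
# mixture mean 0, variance = within-mode (1) + between-mode (9) = 10.
fkl_proposal = norm(0.0, np.sqrt(10.0))   # mass-covering, heavier tails
# A reverse-KL (mode-seeking) fit typically locks onto a single mode:
rkl_proposal = norm(3.0, 1.0)

# Importance weights target/q stay bounded under the FKL proposal but
# explode under the RKL proposal near the missed mode at x = -3.
xs = np.linspace(-20, 20, 4001)
w_fkl = target_pdf(xs) / fkl_proposal.pdf(xs)
w_rkl = target_pdf(xs) / rkl_proposal.pdf(xs)
assert w_fkl.max() < 5 and w_rkl.max() > 1e6
```

Bounded weights are exactly what makes the FKL proposal a safe choice for the IS de-biasing step the abstract describes.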
Kullback-Leibler (KL) divergence is one of the most important divergence measures between probability distributions. In this paper, we investigate the properties of KL divergence between Gaussians. Firstly, for any two $n$-dimensional Gaussians $\mathcal{N}_1$ and $\mathcal{N}_2$, we find the supremum of $KL(\mathcal{N}_1||\mathcal{N}_2)$ when $KL(\mathcal{N}_2||\mathcal{N}_1)\leq\epsilon$ for $\epsilon>0$. This reveals the approximate symmetry of small KL divergence between Gaussians. We also find the infimum of $KL(\mathcal{N}_1||\mathcal{N}_2)$ when $KL(\mathcal{N}_2||\mathcal{N}_1)\geq M$ for $M>0$. Secondly, for any three $n$-dimensional Gaussians $\mathcal{N}_1, \mathcal{N}_2$ and $\mathcal{N}_3$, we find a bound of $KL(\mathcal{N}_1||\mathcal{N}_3)$ if $KL(\mathcal{N}_1||\mathcal{N}_2)$ and $KL(\mathcal{N}_2||\mathcal{N}_3)$ are bounded. This reveals that the KL divergence between Gaussians follows a relaxed triangle inequality. Importantly, all the bounds in the theorems presented in this paper are independent of the dimension $n$.
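The KL divergence between $n$-dimensional Gaussians has a well-known closed form, and the approximate symmetry at small divergence is easy to observe numerically; a sketch with two nearby 2-D Gaussians:

```python
import numpy as np

def kl_gaussians(mu1, Sigma1, mu2, Sigma2):
    """KL(N(mu1, Sigma1) || N(mu2, Sigma2)) for n-dimensional Gaussians, in nats."""
    n = len(mu1)
    inv2 = np.linalg.inv(Sigma2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(inv2 @ Sigma1)
                  + diff @ inv2 @ diff
                  - n
                  + np.log(np.linalg.det(Sigma2) / np.linalg.det(Sigma1)))

# Two nearby 2-D Gaussians: a small KL in one direction implies a small
# (and nearly equal) KL in the other -- the approximate symmetry above.
mu1, S1 = np.zeros(2), np.eye(2)
mu2, S2 = np.array([0.01, 0.0]), 1.01 * np.eye(2)
fwd = kl_gaussians(mu1, S1, mu2, S2)
rev = kl_gaussians(mu2, S2, mu1, S1)
assert fwd < 1e-3 and abs(fwd - rev) < 1e-4
```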
We introduce hardness in relative entropy, a new notion of hardness for search problems which on the one hand is satisfied by all one-way functions and on the other hand implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, hardness in relative entropy unifies the latter two notions of computational entropy and sheds light on the apparent duality between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC '12).