
Quantifying Concordance in Cosmology

Added by Sebastian Seehars
Publication date: 2015
Field: Physics
Language: English

Quantifying the concordance between different cosmological experiments is important for testing the validity of theoretical models and for identifying systematics in the observations. In earlier work, we therefore proposed the Surprise, a concordance measure derived from the relative entropy between posterior distributions. We revisit the properties of the Surprise and describe how it provides a general, versatile, and robust measure of the agreement between datasets. We also compare it to other measures of concordance that have been proposed for cosmology. As an application, we extend our earlier analysis and use the Surprise to quantify the agreement between WMAP 9, Planck 13, and Planck 15 constraints on the $\Lambda$CDM model. Using a principal component analysis in parameter space, we find that the large Surprise between WMAP 9 and Planck 13 (S = 17.6 bits, implying a deviation from consistency at 99.8% confidence) is due to a shift along a direction dominated by the amplitude of the power spectrum. The Planck 15 constraints deviate from the Planck 13 results (S = 56.3 bits), primarily because of a shift in the same direction. The Surprise between WMAP and Planck consequently disappears when moving to Planck 15 (S = -5.1 bits). This means that, unlike Planck 13, Planck 15 is not in tension with WMAP 9. These results illustrate the advantages of the relative entropy and the Surprise for quantifying the disagreement between cosmological experiments and, more generally, as an information metric for cosmology.
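For posteriors approximated as multivariate Gaussians, the relative entropy underlying the Surprise has a closed form. The following is a minimal sketch of that Gaussian formula only — the Surprise itself additionally subtracts the expected relative entropy, which is not computed here, and the function name and inputs are illustrative:

```python
import numpy as np

def relative_entropy_bits(mu1, cov1, mu2, cov2):
    """Relative entropy D(P1 || P2) between two Gaussian posteriors, in bits.

    mu1, cov1: mean and covariance of the first posterior.
    mu2, cov2: mean and covariance of the second posterior.
    Gaussian approximation only; illustrative, not the paper's code.
    """
    k = len(mu1)  # dimension of the parameter space
    cov2_inv = np.linalg.inv(cov2)
    diff = np.asarray(mu2, dtype=float) - np.asarray(mu1, dtype=float)
    # Standard closed form for the KL divergence of two Gaussians (in nats)
    nats = 0.5 * (np.trace(cov2_inv @ np.asarray(cov1, dtype=float))
                  + diff @ cov2_inv @ diff
                  - k
                  + np.log(np.linalg.det(cov2) / np.linalg.det(cov1)))
    return nats / np.log(2.0)  # convert nats to bits
```

Identical posteriors give zero bits, and the divergence grows as the means separate relative to the second posterior's covariance, which is why a parameter shift between two releases shows up directly in this quantity.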



Related research

We propose a new intuitive metric for evaluating the tension between two experiments, and apply it to several data sets. While our metric is non-optimal, if evidence of tension is detected, this evidence is robust and easy to interpret. Assuming a flat $\Lambda$CDM cosmological model, we find that there is a modest $2.2\sigma$ tension between the DES Year 1 results and the ${\it Planck}$ measurements of the Cosmic Microwave Background (CMB). This tension is driven by the difference between the amount of structure observed in the late-time Universe and that predicted from fitting the ${\it Planck}$ data, and appears to be unrelated to the tension between ${\it Planck}$ and local estimates of the Hubble rate. In particular, combining DES, Baryon Acoustic Oscillation (BAO), Big-Bang Nucleosynthesis (BBN), and supernova (SNe) measurements recovers a Hubble constant and sound horizon consistent with ${\it Planck}$, and in tension with local distance-ladder measurements. If the tension between these various data sets persists, it is likely that reconciling ${\it all}$ current data will require breaking the flat $\Lambda$CDM model in at least two different ways: one involving new physics in the early Universe, and one involving new late-time Universe physics.
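Quoting a tension as "$n\sigma$" conventionally means converting a two-tailed probability-to-exceed into the equivalent Gaussian threshold. A short sketch of that standard conversion (this is the conventional mapping, not necessarily the specific metric proposed in the paper):

```python
from statistics import NormalDist

def tension_sigma(pte):
    """Convert a two-tailed probability-to-exceed into an equivalent
    Gaussian 'n-sigma' tension. Illustrative helper, not the paper's metric."""
    return NormalDist().inv_cdf(1.0 - pte / 2.0)
```

For example, a probability-to-exceed of about 4.55% corresponds to a 2-sigma tension, and about 0.27% to 3 sigma.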
A Large Quasar Group (LQG) of particularly large size and high membership has been identified in the DR7QSO catalogue of the Sloan Digital Sky Survey. It has characteristic size (volume^1/3) ~ 500 Mpc (proper size, present epoch), longest dimension ~ 1240 Mpc, membership of 73 quasars, and mean redshift <z> = 1.27. In terms of both size and membership it is the most extreme LQG found in the DR7QSO catalogue for the redshift range 1.0 <= z <= 1.8 of our current investigation. Its location on the sky is ~ 8.8 deg north (~ 615 Mpc projected) of the Clowes & Campusano LQG at the same redshift, <z> = 1.28, which is itself one of the more extreme examples. Their boundaries approach to within ~ 2 deg (~ 140 Mpc projected). This new, huge LQG appears to be the largest structure currently known in the early universe. Its size suggests incompatibility with the Yadav et al. scale of homogeneity for the concordance cosmology, and thus challenges the assumption of the cosmological principle.
C. L. Bennett 2014
The determination of the Hubble constant has been a central goal in observational astrophysics for nearly 100 years. Extraordinary progress has occurred in recent years on two fronts: the cosmic distance ladder measurements at low redshift and cosmic microwave background (CMB) measurements at high redshift. The CMB is used to predict the current expansion rate through a best-fit cosmological model. Complementary progress has been made with baryon acoustic oscillation (BAO) measurements at relatively low redshifts. While BAO data do not independently determine a Hubble constant, they are important for constraints on possible solutions and checks on cosmic consistency. A precise determination of the Hubble constant is of great value, but it is more important to compare the high and low redshift measurements to test our cosmological model. Significant tension would suggest either uncertainties not accounted for in the experimental estimates, or the discovery of new physics beyond the standard model of cosmology. In this paper we examine in detail the tension between the CMB, BAO, and cosmic distance ladder data sets. We find that these measurements are consistent within reasonable statistical expectations, and we combine them to determine a best-fit Hubble constant of 69.6+/-0.7 km/s/Mpc. This value is based upon WMAP9+SPT+ACT+6dFGS+BOSS/DR11+H_0/Riess; we explore alternate data combinations in the text. The combined data constrain the Hubble constant to 1%, with no compelling evidence for new physics.
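At its simplest, combining independent Gaussian constraints on a single parameter such as the Hubble constant reduces to inverse-variance weighting. A toy sketch with made-up numbers (the analysis described above is a full joint likelihood fit, not this shortcut):

```python
def combine_measurements(values, sigmas):
    """Inverse-variance weighted mean of independent Gaussian measurements,
    with its 1-sigma uncertainty. Illustrative only."""
    weights = [1.0 / s ** 2 for s in sigmas]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, total ** -0.5
```

Two equally precise measurements average together and shrink the uncertainty by a factor of $\sqrt{2}$; a much tighter measurement dominates the combination, which is why the CMB-anchored datasets drive the combined Hubble value.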
We present the evolution of dark matter halos in six large cosmological N-body simulations, called the $\nu^2$GC (New Numerical Galaxy Catalog) simulations, based on the $\Lambda$CDM cosmology consistent with observational results obtained by the Planck satellite. The largest simulation consists of $8192^3$ (550 billion) dark matter particles in a box of $1.12\,h^{-1}\,{\rm Gpc}$ (a mass resolution of $2.20 \times 10^{8}\,h^{-1}\,M_{\odot}$). Among simulations utilizing boxes larger than $1\,h^{-1}\,{\rm Gpc}$, ours achieves the highest resolution to date. The $\nu^2$GC simulation with the smallest box consists of eight billion particles in a box of $70\,h^{-1}\,{\rm Mpc}$ (a mass resolution of $3.44 \times 10^{6}\,h^{-1}\,M_{\odot}$). These simulations can follow the evolution of halos over masses spanning eight orders of magnitude, from small dwarf galaxies to massive clusters. Using the unprecedentedly high resolution and powerful statistics of the $\nu^2$GC simulations, we provide statistical results for the halo mass function, mass accretion rate, formation redshift, and merger statistics, and present accurate fitting functions for the Planck cosmology. By combining the $\nu^2$GC simulations with our new semi-analytic galaxy formation model, we are able to prepare mock catalogs of galaxies and active galactic nuclei, which will be made publicly available in the near future.
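The quoted mass resolutions follow directly from the box size and particle count: the particle mass is the mean matter density times the comoving volume per particle. A quick consistency check (a Planck-like $\Omega_m \approx 0.31$ is assumed here; $\rho_{\rm crit} = 2.775 \times 10^{11}\,h^2\,M_\odot/{\rm Mpc}^3$ is the standard critical density):

```python
# Critical density of the universe in units of h^2 M_sun / Mpc^3
RHO_CRIT = 2.775e11

def particle_mass(box_size_mpc_h, n_per_side, omega_m=0.31):
    """Dark-matter particle mass in h^-1 M_sun for an N-body simulation:
    mean matter density times the comoving volume per particle.
    omega_m = 0.31 is an assumed Planck-like value."""
    cell = box_size_mpc_h / n_per_side  # comoving spacing per particle, h^-1 Mpc
    return omega_m * RHO_CRIT * cell ** 3
```

With the largest box (1120 $h^{-1}$ Mpc on a side, $8192^3$ particles), this reproduces the quoted $2.20 \times 10^{8}\,h^{-1}\,M_{\odot}$ to within a percent.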
We present an investigation of the horizon and its effect on global 21-cm observations and analysis. We find that the horizon cannot be ignored when modeling low frequency observations. Even if the sky and antenna beam are known exactly, forward models cannot fully describe the beam-weighted foreground component without accurate knowledge of the horizon. When fitting data to extract the 21-cm signal, a single time-averaged spectrum or independent multi-spectrum fits may be able to compensate for the bias imposed by the horizon. However, these types of fits lack constraining power on the 21-cm signal, leading to large uncertainties on the signal extraction, in some cases larger in magnitude than the 21-cm signal itself. A significant decrease in signal uncertainty can be achieved by performing multi-spectrum fits in which the spectra are modeled simultaneously with common parameters. The cost of this greatly increased constraining power, however, is that the time dependence of the horizon's effect, which is more complex than its spectral dependence, must be precisely modeled to achieve a good fit. To aid in modeling the horizon, we present an algorithm and Python package for calculating the horizon profile from a given observation site using elevation data. We also address several practical concerns such as pixelization error, uncertainty in the horizon profile, and foreground obstructions such as surrounding buildings and vegetation. We demonstrate that our training set-based analysis pipeline can account for all of these factors to model the horizon well enough to precisely extract the 21-cm signal from simulated observations.
