
Information Gains from Cosmological Probes

Published by Sebastian Grandis
Publication date: 2015
Research field: Physics
Paper language: English
Author: S. Grandis





In light of the growing number of cosmological observations, it is important to develop versatile tools to quantify the constraining power and consistency of cosmological probes. Originally motivated from information theory, we use the relative entropy to compute the information gained by Bayesian updates in units of bits. This measure quantifies both the improvement in precision and the surprise, i.e. the tension arising from shifts in central values. Our starting point is a WMAP9 prior which we update with observations of the distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak lensing, as well as the 2015 Planck release. We consider the parameters of the flat $\Lambda$CDM concordance model and some of its extensions, which include curvature and the Dark Energy equation-of-state parameter $w$. We find that, relative to WMAP9 and within these model spaces, the probes that have provided the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) and SNe experiments (3.1 bits). The other cosmological probes, including weak lensing (1.7 bits) and $H_0$ measures (1.7 bits), have contributed information but at a lower level. Furthermore, we do not find any significant surprise when updating the constraints of WMAP9 with any of the other experiments, meaning that they are consistent with WMAP9. However, when we choose Planck15 as the prior, we find that, accounting for the full multi-dimensionality of the parameter space, the weak lensing measurements of CFHTLenS produce a large surprise of 4.4 bits which is statistically significant at the 8$\sigma$ level. We discuss how the relative entropy provides a versatile and robust framework to compare cosmological probes in the context of current and future surveys.
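For Gaussian priors and posteriors the relative entropy has a closed form, which makes the bit counts above easy to reproduce schematically. A minimal sketch (this is the generic Gaussian Kullback-Leibler formula, not the paper's actual pipeline; the two-parameter toy covariances are purely illustrative):

```python
import numpy as np

def gaussian_relative_entropy_bits(mu_prior, cov_prior, mu_post, cov_post):
    """D_KL(posterior || prior) for two multivariate Gaussians, in bits.

    Closed form (nats): 0.5 * [tr(C0^-1 C1) + (mu0-mu1)^T C0^-1 (mu0-mu1)
                               - d + ln(det C0 / det C1)], then divide by ln 2.
    """
    mu0, c0 = np.asarray(mu_prior, float), np.asarray(cov_prior, float)
    mu1, c1 = np.asarray(mu_post, float), np.asarray(cov_post, float)
    d = mu0.size
    c0_inv = np.linalg.inv(c0)
    dmu = mu0 - mu1  # a shift in central values contributes "surprise"
    nats = 0.5 * (np.trace(c0_inv @ c1) + dmu @ c0_inv @ dmu - d
                  + np.log(np.linalg.det(c0) / np.linalg.det(c1)))
    return nats / np.log(2.0)

# Toy update: halving each 1-sigma width with unchanged central values
# yields a positive gain driven purely by the improvement in precision.
prior_cov = np.eye(2)
post_cov = 0.25 * np.eye(2)  # sigma halved -> variance quartered
gain = gaussian_relative_entropy_bits([0, 0], prior_cov, [0, 0], post_cov)
```

Because the measure is a single scalar, gains from very different probes (BAO, SNe, lensing) become directly comparable.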



Read also

The combination of multiple observational probes has long been advocated as a powerful technique to constrain cosmological parameters, in particular dark energy. The Dark Energy Survey has measured 207 spectroscopically confirmed Type Ia supernova lightcurves; the baryon acoustic oscillation feature; weak gravitational lensing; and galaxy clustering. Here we present combined results from these probes, deriving constraints on the equation of state, $w$, of dark energy and its energy density in the Universe. Independently of other experiments, such as those that measure the cosmic microwave background, the probes from this single photometric survey rule out a Universe with no dark energy, finding $w=-0.80^{+0.09}_{-0.11}$. The geometry is shown to be consistent with a spatially flat Universe, and we obtain a constraint on the baryon density of $\Omega_b=0.069^{+0.009}_{-0.012}$ that is independent of early Universe measurements. These results demonstrate the potential power of large multi-probe photometric surveys and pave the way for order of magnitude advances in our constraints on properties of dark energy and cosmology over the next decade.
The combination of different cosmological probes offers stringent tests of the $\Lambda$CDM model and enhanced control of systematics. For this purpose, we present an extension of the lightcone generator UFalcon first introduced in Sgier et al. 2019 (arXiv:1801.05745), enabling the simulation of a self-consistent set of maps for different cosmological probes. Each realization is generated from the same underlying simulated density field, and contains full-sky maps of different probes, namely weak lensing shear, galaxy overdensity including RSD, CMB lensing, and CMB temperature anisotropies from the ISW effect. The lightcone generation performed by UFalcon is parallelized and based on the replication of a large periodic volume simulated with the GPU-accelerated $N$-Body code PkdGrav3. The post-processing to construct the lightcones requires only a runtime of about 1 walltime-hour corresponding to about 100 CPU-hours. We use a randomization procedure to increase the number of quasi-independent full-sky UFalcon map-realizations, which enables us to compute an accurate multi-probe covariance matrix. Using this framework, we forecast cosmological parameter constraints by performing a multi-probe likelihood analysis for a combination of simulated future stage-IV-like surveys. We find that the inclusion of the cross-correlations between the probes significantly increases the information gain in the parameter constraints. We also find that the use of a non-Gaussian covariance matrix is increasingly important, as more probes and cross-correlation power spectra are included. A version of the UFalcon package currently including weak gravitational lensing is publicly available.
There is a renewed interest in constraining the sum of the masses of the three neutrino flavours by using cosmological measurements. Solar, atmospheric, and reactor neutrino experiments have confirmed neutrino oscillations, implying that neutrinos have non-zero mass, but without pinning down their absolute masses. While it is established that the effect of light neutrinos on the evolution of cosmic structure is small, the upper limits derived from large-scale structure could help significantly to constrain the absolute scale of the neutrino masses. It is also important to know the sum of neutrino masses as it is degenerate with the values of other cosmological parameters, e.g. the amplitude of fluctuations and the primordial spectral index. A summary of cosmological neutrino mass limits is given. Current results from cosmology set an upper limit on the sum of the neutrino masses of ~1 eV, somewhat depending on the data sets used in the analyses and assumed priors on cosmological parameters. It is important to emphasize that the total neutrino mass ('hot dark matter') is derived assuming that the other components in the universe are baryons, cold dark matter and dark energy. We assess the impact of neutrino masses on the matter power spectrum, the cosmic microwave background, peculiar velocities and gravitational lensing. We also discuss future methods to improve the mass upper limits by an order of magnitude.
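The cosmological sensitivity to the summed mass comes from its direct mapping to an energy density. A minimal sketch, assuming the standard conversion $\Omega_\nu h^2 \simeq \sum m_\nu / 93.14\,\mathrm{eV}$ and an illustrative $h = 0.7$ (both values are assumptions for this example, not taken from the abstract above):

```python
# Standard conversion from summed neutrino mass to its cosmic energy
# density: Omega_nu h^2 ~= sum(m_nu) / 93.14 eV. The Hubble parameter
# h = 0.7 is an assumed, illustrative value, not a measurement.
sum_mnu_eV = 1.0                   # the ~1 eV upper limit quoted above
h = 0.7                            # dimensionless Hubble parameter (assumed)
omega_nu_h2 = sum_mnu_eV / 93.14   # physical neutrino density
Omega_nu = omega_nu_h2 / h**2      # fraction of the critical density
# A 1 eV total mass corresponds to roughly 2% of the critical density,
# which is why such a small component can still bias other parameters.
```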
We performed the photometric analysis of the M2 and M92 globular clusters in the g and r bands of the SLOAN photometric system. We transformed these g and r bands into the BV bands of the Johnson-Cousins photometric system and built the color-magnitude diagram (CMD). We estimated the age and metallicity of both clusters by fitting Padova isochrones of different ages and metallicities onto the CMD. We studied the Einstein-de Sitter model, the benchmark model, and the cosmological parameters from the WMAP and Planck surveys. Finally, we compared the estimated ages of the globular clusters to the ages implied by these cosmological models and by the WMAP and Planck parameter values.
Recently, there have been two landmark discoveries of gravitationally lensed supernovae: the first multiply-imaged SN, Refsdal, and the first Type Ia SN resolved into multiple images, SN iPTF16geu. Fitting the multiple light curves of such objects can deliver measurements of the lensing time delays, which are the difference in arrival times for the separate images. These measurements provide precise tests of lens models or constraints on the Hubble constant and other cosmological parameters that are independent of the local distance ladder. Over the next decade, accurate time delay measurements will be needed for the tens to hundreds of lensed SNe to be found by wide-field time-domain surveys such as LSST and WFIRST. We have developed an open source software package for simulations and time delay measurements of multiply-imaged SNe, including an improved characterization of the uncertainty caused by microlensing. We describe simulations using the package that suggest a before-peak detection of the leading image enables a more accurate and precise time delay measurement (by ~1 and ~2 days, respectively), when compared to an after-peak detection. We also conclude that fitting the effects of microlensing without an accurate prior often leads to biases in the time delay measurement and over-fitting to the data, but that employing a Gaussian Process Regression (GPR) technique is sufficient for determining the uncertainty due to microlensing.
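The GPR approach mentioned above can be sketched with a generic regressor. A minimal illustration using scikit-learn on synthetic light-curve residuals (the data, kernel choice, and length scale are all hypothetical stand-ins, not those of the actual package):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic light-curve residuals (observed minus template magnitudes),
# in which a slow coherent trend stands in for microlensing distortion.
rng = np.random.default_rng(0)
t = np.linspace(-10.0, 30.0, 40)[:, None]      # days relative to peak
resid = 0.05 * np.sin(t.ravel() / 8.0) + rng.normal(0.0, 0.01, 40)

# Smooth RBF kernel for the microlensing trend plus white noise for
# photometric scatter; hyperparameters are refit during training.
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-4)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, resid)

# The predictive standard deviation is what propagates the microlensing
# uncertainty into the time-delay error budget.
trend, trend_std = gpr.predict(t, return_std=True)
```

The appeal of GPR here is that it yields a per-epoch uncertainty band rather than a single best-fit curve, avoiding the over-fitting that a parametric microlensing model without a good prior can produce.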