
Fast Lightcones for Combined Cosmological Probes

Posted by: Raphael Sgier
Publication date: 2020
Research field: Physics
Paper language: English





The combination of different cosmological probes offers stringent tests of the $\Lambda$CDM model and enhanced control of systematics. For this purpose, we present an extension of the lightcone generator UFalcon first introduced in Sgier et al. 2019 (arXiv:1801.05745), enabling the simulation of a self-consistent set of maps for different cosmological probes. Each realization is generated from the same underlying simulated density field, and contains full-sky maps of different probes, namely weak lensing shear, galaxy overdensity including RSD, CMB lensing, and CMB temperature anisotropies from the ISW effect. The lightcone generation performed by UFalcon is parallelized and based on the replication of a large periodic volume simulated with the GPU-accelerated $N$-body code PkdGrav3. The post-processing to construct the lightcones requires a runtime of only about 1 walltime-hour, corresponding to about 100 CPU-hours. We use a randomization procedure to increase the number of quasi-independent full-sky UFalcon map realizations, which enables us to compute an accurate multi-probe covariance matrix. Using this framework, we forecast cosmological parameter constraints by performing a multi-probe likelihood analysis for a combination of simulated future stage-IV-like surveys. We find that the inclusion of the cross-correlations between the probes significantly increases the information gain in the parameter constraints. We also find that the use of a non-Gaussian covariance matrix is increasingly important as more probes and cross-correlation power spectra are included. A version of the UFalcon package currently including weak gravitational lensing is publicly available.
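
To illustrate the last step of such a pipeline, the sketch below estimates a multi-probe covariance matrix from a set of quasi-independent map realizations, in the spirit of the randomization procedure described above. It is a minimal NumPy example, not the UFalcon API; the function name, array shapes, and the Hartlap debiasing of the inverse covariance are illustrative assumptions.

```python
# Minimal sketch (NumPy only, not the UFalcon API): sample covariance of a
# stacked multi-probe data vector over many quasi-independent realizations.
import numpy as np

def multiprobe_covariance(cls_per_realization):
    """Covariance of concatenated auto- and cross-power spectra.

    cls_per_realization : array of shape (n_real, n_data), one row per
        full-sky realization, each row stacking e.g. shear x shear,
        shear x galaxy, CMB lensing x shear, ... band powers.
    """
    cls = np.asarray(cls_per_realization)
    n_real, n_data = cls.shape
    cov = np.cov(cls, rowvar=False)  # (n_data, n_data) sample covariance
    # Hartlap (2007) debiasing of the inverse covariance for a Gaussian
    # likelihood; requires n_real > n_data + 2.
    hartlap = (n_real - n_data - 2) / (n_real - 1)
    inv_cov = hartlap * np.linalg.inv(cov)
    return cov, inv_cov

# Usage with synthetic numbers: 200 realizations, 150 band powers.
rng = np.random.default_rng(0)
cov, inv_cov = multiprobe_covariance(rng.normal(size=(200, 150)))
```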


Read also

The advent of a new generation of large-scale galaxy surveys is pushing cosmological numerical simulations into uncharted territory. The simultaneous requirements of high resolution and very large volume pose serious technical challenges due to their computational and data-storage demands. In this paper, we present a novel approach dubbed Dynamic Zoom Simulations -- or DZS -- developed to tackle these issues. Our method is tailored to the production of lightcone outputs from N-body numerical simulations, which allow for more efficient storage and post-processing compared to standard comoving snapshots, and more directly mimic the format of survey data. In DZS, the resolution of the simulation is dynamically decreased outside the lightcone surface, reducing the computational workload while preserving the accuracy inside the lightcone and the large-scale gravitational field. We show that our approach can achieve virtually identical results to traditional simulations at half the computational cost for our largest box. We also forecast this speedup to increase up to a factor of 5 for larger and/or higher-resolution simulations. We assess the accuracy of the numerical integration by comparing pairs of identical simulations run with and without DZS. Deviations in the lightcone halo mass function, in the sky-projected lightcone, and in the 3D matter lightcone always remain below 0.1%. In summary, our results indicate that the DZS technique may provide a highly valuable tool to address the technical challenges that will characterise the next generation of large-scale cosmological simulations.
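
As a rough illustration of the idea behind DZS, and not the authors' implementation, the snippet below flags particles that lie outside the observer's past lightcone at the current simulation redshift, i.e. the region where resolution could be degraded. The use of astropy's Planck18 cosmology and the function name are assumptions for the sketch.

```python
# Illustrative sketch, not the DZS code: flag particles that have already
# been crossed by the observer's past lightcone at the current simulation
# redshift, i.e. the region eligible for degraded resolution.
import numpy as np
from astropy.cosmology import Planck18

def outside_lightcone(positions_mpc, z_sim, observer=np.zeros(3)):
    """True where a comoving position lies beyond the lightcone radius."""
    # Comoving radius of the past lightcone at simulation redshift z_sim.
    chi_lc = Planck18.comoving_distance(z_sim).to_value("Mpc")
    d = np.linalg.norm(positions_mpc - observer, axis=1)
    return d > chi_lc

# At z_sim = 0.5 the lightcone radius is roughly 1.9 Gpc, so the first
# particle below could be coarsened while the second could not.
pos = np.array([[3000.0, 0.0, 0.0], [500.0, 0.0, 0.0]])
print(outside_lightcone(pos, z_sim=0.5))  # [ True False]
```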
S. Grandis (2015)
In light of the growing number of cosmological observations, it is important to develop versatile tools to quantify the constraining power and consistency of cosmological probes. Originally motivated by information theory, we use the relative entropy to compute the information gained by Bayesian updates in units of bits. This measure quantifies both the improvement in precision and the surprise, i.e. the tension arising from shifts in central values. Our starting point is a WMAP9 prior which we update with observations of the distance ladder, supernovae (SNe), baryon acoustic oscillations (BAO), and weak lensing, as well as the 2015 Planck release. We consider the parameters of the flat $\Lambda$CDM concordance model and some of its extensions, which include curvature and the dark energy equation-of-state parameter $w$. We find that, relative to WMAP9 and within these model spaces, the probes that have provided the greatest gains are Planck (10 bits), followed by BAO surveys (5.1 bits) and SNe experiments (3.1 bits). The other cosmological probes, including weak lensing (1.7 bits) and $H_0$ measurements (1.7 bits), have contributed information but at a lower level. Furthermore, we do not find any significant surprise when updating the constraints of WMAP9 with any of the other experiments, meaning that they are consistent with WMAP9. However, when we choose Planck15 as the prior, we find that, accounting for the full multi-dimensionality of the parameter space, the weak lensing measurements of CFHTLenS produce a large surprise of 4.4 bits, which is statistically significant at the $8\sigma$ level. We discuss how the relative entropy provides a versatile and robust framework to compare cosmological probes in the context of current and future surveys.
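
For Gaussian posteriors the information gain quoted above has a closed form: the relative entropy of the updated distribution with respect to the prior, converted from nats to bits. The sketch below, a hypothetical helper not taken from the paper, implements that standard formula.

```python
# Standard Gaussian relative entropy (KL divergence) in bits; a hypothetical
# helper, not code from the paper.
import numpy as np

def relative_entropy_bits(mu_prior, cov_prior, mu_post, cov_post):
    """D_KL( N(mu_post, cov_post) || N(mu_prior, cov_prior) ) in bits."""
    mu_prior, mu_post = np.atleast_1d(mu_prior, mu_post)
    cov_prior, cov_post = np.atleast_2d(cov_prior, cov_post)
    d = mu_prior.size
    inv_prior = np.linalg.inv(cov_prior)
    diff = mu_post - mu_prior
    nats = 0.5 * (np.trace(inv_prior @ cov_post)   # change in precision
                  + diff @ inv_prior @ diff        # "surprise" from mean shifts
                  - d
                  + np.log(np.linalg.det(cov_prior) / np.linalg.det(cov_post)))
    return nats / np.log(2.0)

# Example: halving the 1-sigma error on one parameter with no shift in the
# mean yields about 0.46 bits of information gain.
print(relative_entropy_bits(0.0, [[1.0]], 0.0, [[0.25]]))
```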
We measure the clustering of DES Year 1 galaxies that are intended to be combined with weak lensing samples in order to produce precise cosmological constraints from the joint analysis of large-scale structure and lensing correlations. Two-point correlation functions are measured for a sample of $6.6 \times 10^{5}$ luminous red galaxies selected using the \textsc{redMaGiC} algorithm over an area of $1321$ square degrees, in the redshift range $0.15 < z < 0.9$, split into five tomographic redshift bins. The sample has a mean redshift uncertainty of $\sigma_{z}/(1+z) = 0.017$. We quantify and correct spurious correlations induced by spatially variable survey properties, testing their impact on the clustering measurements and covariance. We demonstrate the sample's robustness by testing for stellar contamination, for potential biases that could arise from the systematic correction, and for the consistency between the two-point auto- and cross-correlation functions. We show that the corrections we apply have a significant impact on the resultant measurement of cosmological parameters, but that the results are robust against arbitrary choices in the correction method. We find the linear galaxy bias in each redshift bin in a fiducial cosmology to be $b(z=0.24)=1.40 \pm 0.08$, $b(z=0.38)=1.61 \pm 0.05$, and $b(z=0.53)=1.60 \pm 0.04$ for galaxies with luminosities $L/L_* > 0.5$, $b(z=0.68)=1.93 \pm 0.05$ for $L/L_* > 1$, and $b(z=0.83)=1.99 \pm 0.07$ for $L/L_* > 1.5$, broadly consistent with expectations for the redshift and luminosity dependence of the bias of red galaxies. We show these measurements to be consistent with the linear bias obtained from tangential shear measurements.
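
For context, a minimal sketch of how such two-point correlation functions are typically estimated is given below, using the standard Landy-Szalay estimator on precomputed pair counts; the function name and numbers are illustrative, and the actual DES analysis relies on dedicated pair-counting codes.

```python
# Minimal sketch of the Landy-Szalay estimator for w(theta) from precomputed
# pair counts; the inputs are made up for illustration.
import numpy as np

def landy_szalay(dd, dr, rr, n_data, n_rand):
    """w(theta) per angular bin from raw data-data, data-random and
    random-random pair counts."""
    ddn = dd / (n_data * (n_data - 1) / 2.0)  # normalize by total DD pairs
    drn = dr / (n_data * n_rand)              # ... total DR pairs
    rrn = rr / (n_rand * (n_rand - 1) / 2.0)  # ... total RR pairs
    return (ddn - 2.0 * drn + rrn) / rrn

# Made-up counts for three angular bins, 1e3 galaxies and 1e4 randoms.
dd = np.array([180.0, 130.0, 95.0])
dr = np.array([4100.0, 4000.0, 3950.0])
rr = np.array([111000.0, 109500.0, 108800.0])
w_theta = landy_szalay(dd, dr, rr, n_data=1000, n_rand=10000)
# On large, linear scales w_gg(theta) ~ b^2 * w_mm(theta), which is how a
# linear bias like those quoted above can be read off against a fiducial
# matter prediction.
```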
The Euclid space telescope will measure the shapes and redshifts of galaxies to reconstruct the expansion history of the Universe and the growth of cosmic structures. Estimation of the expected performance of the experiment, in terms of predicted constraints on cosmological parameters, has so far relied on different methodologies and numerical implementations, developed for different observational probes and for their combination. In this paper we present validated forecasts that combine both theoretical and observational expertise for different cosmological probes. This is presented to provide the community with reliable numerical codes and methods for Euclid cosmological forecasts. We describe in detail the methodology adopted for Fisher matrix forecasts, applied to galaxy clustering, weak lensing and their combination. We estimate the required accuracy for Euclid forecasts and outline a methodology for their development. We then compare and improve different numerical implementations, reaching uncertainties on the errors of cosmological parameters that are less than the required precision in all cases. Furthermore, we provide details on the validated implementations that can be used by the reader to validate their own codes if required. We present new cosmological forecasts for Euclid. We find that results depend on the specific cosmological model and the remaining freedom in each setup, i.e. flat or non-flat spatial cosmologies, or different cuts at nonlinear scales. The validated numerical implementations can now be reliably used for any such setup. We present results for an optimistic and a pessimistic choice of these settings. We demonstrate that the impact of cross-correlations is particularly relevant for models beyond a cosmological constant and may allow us to increase the dark energy Figure of Merit by at least a factor of three.
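
As a schematic illustration of the Fisher-matrix methodology mentioned above, and not the validated Euclid pipeline, the snippet below builds a Fisher matrix from numerical derivatives of a toy model data vector and reads off marginalized parameter errors; all names and numbers are assumptions for the example.

```python
# Schematic Fisher forecast, not the validated Euclid pipeline: central
# finite differences of a toy model data vector and a Gaussian likelihood
# with parameter-independent covariance.
import numpy as np

def fisher_matrix(model, theta0, inv_cov, rel_step=1e-4):
    """F_ij = (d mu / d theta_i)^T C^{-1} (d mu / d theta_j)."""
    theta0 = np.asarray(theta0, dtype=float)
    derivs = []
    for i in range(theta0.size):
        dp = np.zeros_like(theta0)
        dp[i] = rel_step * max(abs(theta0[i]), 1.0)
        derivs.append((model(theta0 + dp) - model(theta0 - dp)) / (2.0 * dp[i]))
    derivs = np.array(derivs)            # shape (n_params, n_data)
    return derivs @ inv_cov @ derivs.T   # shape (n_params, n_params)

# Toy example: two parameters, a ten-point data vector, 0.05 uncorrelated errors.
x = np.linspace(0.1, 1.0, 10)
model = lambda th: th[0] * x + th[1] * x**2
F = fisher_matrix(model, [1.0, 0.5], np.eye(10) / 0.05**2)
marginalized_errors = np.sqrt(np.diag(np.linalg.inv(F)))
```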
Recently, there have been two landmark discoveries of gravitationally lensed supernovae: the first multiply-imaged SN, Refsdal, and the first Type Ia SN resolved into multiple images, SN iPTF16geu. Fitting the multiple light curves of such objects can deliver measurements of the lensing time delays, which are the differences in arrival times between the separate images. These measurements provide precise tests of lens models or constraints on the Hubble constant and other cosmological parameters that are independent of the local distance ladder. Over the next decade, accurate time delay measurements will be needed for the tens to hundreds of lensed SNe to be found by wide-field time-domain surveys such as LSST and WFIRST. We have developed an open source software package for simulations and time delay measurements of multiply-imaged SNe, including an improved characterization of the uncertainty caused by microlensing. We describe simulations using the package which suggest that a before-peak detection of the leading image enables a more accurate and precise time delay measurement (by ~1 and ~2 days, respectively) compared to an after-peak detection. We also conclude that fitting the effects of microlensing without an accurate prior often leads to biases in the time delay measurement and over-fitting to the data, but that employing a Gaussian Process Regression (GPR) technique is sufficient for determining the uncertainty due to microlensing.
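
As an illustration of the GPR ingredient only, and not of the authors' package, the sketch below fits a Gaussian process to a synthetic light curve with scikit-learn and extracts the predictive uncertainty band of the kind that would feed into a microlensing error budget; the kernel choice and numbers are assumptions.

```python
# Illustrative GPR fit to a synthetic light curve with scikit-learn; not the
# authors' package.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic light curve: a smooth rise and fall plus noise (arbitrary units).
rng = np.random.default_rng(1)
t = np.linspace(-15.0, 40.0, 60)  # days relative to peak
flux = np.exp(-0.5 * (t / 12.0) ** 2) + 0.03 * rng.normal(size=t.size)

kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gpr.fit(t[:, None], flux)

# Predictive mean and standard deviation; the width of this band is the kind
# of quantity that enters a microlensing uncertainty budget.
t_fine = np.linspace(-15.0, 40.0, 300)
mean, sigma = gpr.predict(t_fine[:, None], return_std=True)
```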