
Density split statistics: joint model of counts and lensing in cells

Added by Oliver Friedrich
Publication date: 2017
Field: Physics
Language: English





We present density split statistics, a framework that studies lensing and counts-in-cells as a function of foreground galaxy density, thereby providing a large-scale measurement of both 2-point and 3-point statistics. Our method extends our earlier work on trough lensing and is summarized as follows: given a foreground (low redshift) population of galaxies, we divide the sky into subareas of equal size but distinct galaxy density. We then measure lensing around uniformly spaced points separately in each of these subareas, as well as counts-in-cells (CiC) statistics. The lensing signals trace the matter density contrast around regions of fixed galaxy density. Through the CiC measurements, this can be related to the density profile around regions of fixed matter density. Together, these measurements constitute a powerful probe of cosmology, of the skewness of the density field, and of the connection between galaxies and matter. In this paper we show how to model both the density split lensing signal and CiC from basic ingredients: a non-linear power spectrum, clustering hierarchy coefficients from perturbation theory, and a parametric model for galaxy bias and shot noise. Using N-body simulations, we demonstrate that this model is sufficiently accurate for a cosmological analysis of Year 1 data from the Dark Energy Survey.
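To make the splitting step concrete, the following minimal sketch (with a hypothetical helper `density_split` and toy Poisson counts rather than real survey data) assigns each line of sight to one of several equally populated density quantiles based on its foreground galaxy count; the lensing and CiC measurements described above would then be made separately within each quantile.

```python
import numpy as np

def density_split(tracer_counts, n_quantiles=5):
    """Hypothetical helper: assign each line of sight to a density quantile
    based on its foreground tracer-galaxy count (equally populated sub-areas)."""
    # Rank the lines of sight by galaxy count (0 = emptiest)
    ranks = np.argsort(np.argsort(tracer_counts))
    # Map ranks onto n_quantiles equally populated bins
    return (ranks * n_quantiles) // len(tracer_counts)

# Toy example: 100,000 lines of sight with Poisson-sampled galaxy counts
rng = np.random.default_rng(0)
counts = rng.poisson(lam=30.0, size=100_000)
quantile = density_split(counts, n_quantiles=5)

# The density split lensing signal would then be the tangential shear stacked
# separately over the points in each quantile (e.g. quantile == 4 for the
# most overdense fifth of the sky), alongside the CiC histogram per quantile.
```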



Related research

We derive cosmological constraints from the probability distribution function (PDF) of evolved large-scale matter density fluctuations. We do this by splitting lines of sight by density, based on their count of tracer galaxies, and by measuring both gravitational shear around, and counts-in-cells in, overdense and underdense lines of sight in Dark Energy Survey (DES) First Year and Sloan Digital Sky Survey (SDSS) data. Our analysis uses a perturbation theory model (see the companion paper, Friedrich et al.) and is validated using N-body simulation realizations and log-normal mocks. It allows us to constrain cosmology, the bias and stochasticity of galaxies with respect to the matter density and, in addition, the skewness of the matter density field. From a Bayesian model comparison, we find that the data weakly prefer a connection of galaxies and matter that is stochastic beyond Poisson fluctuations on angular smoothing scales $\leq 20$ arcmin. The two stochasticity models we fit yield DES constraints on the matter density of $\Omega_m = 0.26^{+0.04}_{-0.03}$ and $\Omega_m = 0.28^{+0.05}_{-0.04}$, which are consistent with each other. These values also agree with the DES analysis of galaxy and shear two-point functions (3x2pt), which only uses second moments of the PDF. Constraints on $\sigma_8$ are model dependent ($\sigma_8 = 0.97^{+0.07}_{-0.06}$ and $0.80^{+0.06}_{-0.07}$ for the two stochasticity models), but consistent with each other and with the 3x2pt results if stochasticity is at the low end of the posterior range. As an additional test of gravity, counts and lensing in cells allow us to compare the skewness $S_3$ of the matter density PDF to its $\Lambda$CDM prediction. We find no evidence of excess skewness in any model or data set, with better than 25 per cent relative precision in the skewness estimate from DES alone.
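As an illustration of the skewness test mentioned at the end of the abstract, the sketch below estimates the reduced skewness $S_3 = \langle\delta^3\rangle / \langle\delta^2\rangle^2$ of a smoothed density-contrast field; the log-normal field used here is only a stand-in for the mocks discussed above, not DES data.

```python
import numpy as np

def reduced_skewness(delta):
    """Reduced skewness S_3 = <delta^3> / <delta^2>^2 of a density-contrast
    field delta (assumed already smoothed on the scale of interest)."""
    d = delta - delta.mean()
    return np.mean(d**3) / np.mean(d**2) ** 2

# Toy example: a log-normal delta field as a stand-in for the smoothed matter
# density contrast (log-normal mocks are mentioned in the abstract above).
rng = np.random.default_rng(1)
sigma = 0.4
delta = np.exp(rng.normal(-0.5 * sigma**2, sigma, size=1_000_000)) - 1.0
print(reduced_skewness(delta))  # positive skewness, as expected for clustering
```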
The statistics of peaks in weak lensing convergence maps is a promising tool to investigate both the properties of dark matter haloes and to constrain cosmological parameters. We study how the number of detectable peaks and its scaling with redshift depend upon the cluster dark matter halo profiles, and use peak statistics to constrain the parameters of the mass-concentration (MC) relation. We investigate which constraints the Euclid mission can set on the MC coefficients, also taking into account degeneracies with the cosmological parameters. To this end, we first estimate the number of peaks and its redshift distribution for different MC relations. We find that the steeper the mass dependence and the larger the normalisation, the higher the number of detectable clusters, with the total number of peaks changing by up to 40% depending on the MC relation. We then perform a Fisher matrix forecast of the errors on the MC relation parameters as well as the cosmological parameters. We find that peak number counts detected by Euclid can determine the normalisation $A_v$, the mass slope $B_v$, the redshift slope $C_v$ and the intrinsic scatter $\sigma_v$ of the MC relation to an unprecedented accuracy of $\sigma(A_v)/A_v = 1\%$, $\sigma(B_v)/B_v = 4\%$, $\sigma(C_v)/C_v = 9\%$ and $\sigma(\sigma_v)/\sigma_v = 1\%$ if all cosmological parameters are assumed to be known. Should we relax this severe assumption, the constraints are degraded, but remarkably good results can be restored by fixing only some of the parameters or by combining peak counts with Planck data. This precision can give insight into competing scenarios of structure formation and evolution and into the role of baryons in cluster assembly. Alternatively, for a fixed MC relation, future peak counts can perform as well as current BAO and SNeIa data when combined with Planck.
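The Fisher matrix forecast described above can be sketched in a few lines. The toy peak-count model `toy_counts` and the step sizes below are placeholders, not the Euclid forecast model; the sketch only illustrates how binned peak counts with Poisson variances translate into parameter errors via $F_{ij} = \sum_b (\partial N_b/\partial p_i)(\partial N_b/\partial p_j)/\mathrm{Var}(N_b)$.

```python
import numpy as np

def fisher_matrix(model, params, steps, variances):
    """Fisher matrix F_ij = sum_b dN_b/dp_i * dN_b/dp_j / Var(N_b), with the
    derivatives of the binned counts taken by central finite differences."""
    derivs = []
    for i, (p, h) in enumerate(zip(params, steps)):
        up, down = params.copy(), params.copy()
        up[i], down[i] = p + h, p - h
        derivs.append((model(up) - model(down)) / (2.0 * h))
    derivs = np.array(derivs)
    return derivs @ np.diag(1.0 / variances) @ derivs.T

# Placeholder peak-count model: counts per bin scaling with the MC-relation
# normalisation A_v and mass slope B_v (illustrative only, not the Euclid model).
bins = np.arange(1, 9)
def toy_counts(p):
    A_v, B_v = p
    return 1.0e4 * A_v * bins ** (-1.5 + 0.1 * B_v)

params = np.array([1.0, 1.0])
F = fisher_matrix(toy_counts, params, steps=[1e-3, 1e-3],
                  variances=toy_counts(params))   # Poisson: Var(N_b) = N_b
print(np.sqrt(np.diag(np.linalg.inv(F))))         # marginalised 1-sigma errors
```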
We present constraints on the cosmological constant $\lambda_0$ and the density parameter $\Omega_0$ from a joint analysis of the gravitational lensing statistics of the Jodrell Bank-VLA Astrometric Survey (JVAS), optical gravitational lens surveys from the literature, and CMB anisotropies. This is the first time that quantitative joint constraints involving lensing statistics and CMB anisotropies have been presented. Within the assumptions made, we achieve very tight constraints on both $\lambda_0$ and $\Omega_0$. These assumptions are: cold dark matter models, no tensor components, no reionisation, CMB temperature $T_{\rm CMB} = 2.728\,$K, number of neutrinos $n_\nu = 3$, helium abundance $Y_{\rm He} = 0.246$, spectral index $n_s = 1.0$, Hubble constant $H_0 = 68\,$km/s/Mpc, and baryonic density $\Omega_b = 0.05$. All models were normalised to the COBE data and no closed models ($k = +1$) were computed. Using the CMB data alone, the best-fit model has $\lambda_0 = 0.60$ and $\Omega_0 = 0.34$, and at 99% confidence the lower limit on $\lambda_0 + \Omega_0$ is 0.8. Including constraints from gravitational lensing statistics does not change this significantly, although it does change the allowed region of parameter space. A universe with $\lambda_0 = 0$ is ruled out for any value of $\Omega_0$ at better than 99% confidence using the CMB alone. Combined with constraints from lensing statistics, $\lambda_0 = 0$ is also ruled out at better than 99% confidence. As the region of parameter space allowed by the CMB is, within our assumptions, much smaller than that allowed by lensing statistics, the main result of combining the two is to change the range of parameter space allowed by the CMB along its axis of degeneracy.
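A minimal sketch of how such a joint constraint is formed: assuming independent probes, their $\chi^2$ surfaces on a common $(\lambda_0, \Omega_0)$ grid simply add, and the 99% region follows from $\Delta\chi^2 < 9.21$ for two free parameters. The two surfaces below are purely illustrative toys, not the actual JVAS or CMB likelihoods.

```python
import numpy as np

# Common (lambda_0, Omega_0) grid for both probes
lam0 = np.linspace(0.0, 1.0, 101)
om0 = np.linspace(0.0, 1.0, 101)
L, O = np.meshgrid(lam0, om0, indexing="ij")

# Toy chi^2 surfaces: a degenerate CMB ridge in lam0 + om0 and a broad lensing
# constraint mainly on lam0 (illustrative shapes only).
chi2_cmb = ((L + O - 0.94) / 0.05) ** 2
chi2_lens = ((L - 0.6) / 0.2) ** 2

chi2_joint = chi2_cmb + chi2_lens        # independent probes: chi^2 values add
delta_chi2 = chi2_joint - chi2_joint.min()
allowed_99 = delta_chi2 < 9.21           # 99% region for two free parameters
print(allowed_99.sum(), "grid points allowed at 99% confidence")
```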
We propose counting peaks in weak lensing (WL) maps, as a function of their height, to probe models of dark energy and to constrain cosmological parameters. Because peaks can be identified in two-dimensional WL maps directly, they can provide constraints that are free from the potential selection effects and biases involved in identifying and determining the masses of galaxy clusters. We have run cosmological N-body simulations to produce WL convergence maps in three models with different constant values of the dark energy equation of state parameter, $w = -0.8$, $-1$, and $-1.2$, with a fixed normalization of the primordial power spectrum (corresponding to present-day normalizations of $\sigma_8 = 0.742$, $0.798$, and $0.839$, respectively). By comparing the number of WL peaks in 8 convergence bins in the range $-0.1 < \kappa < 0.2$, in multiple realizations of a single simulated $3\times3$ degree field, we show that the first (last) pair of models can be distinguished at the 95% (85%) confidence level. A survey with depth and area (20,000 square degrees) comparable to those expected from LSST should have a factor of approximately 50 better parameter sensitivity. We find that relatively low-amplitude peaks ($\kappa = 0.03$), which typically do not correspond to a single collapsed halo along the line of sight, account for most of this sensitivity. We study a range of smoothing scales and source galaxy redshifts ($z_s$). With a fixed source galaxy density of 15 galaxies/arcmin$^2$, the best results are provided by the smallest scale we can reliably simulate, 1 arcminute, and $z_s = 2$ provides substantially better sensitivity than $z_s < 1.5$.
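The peak-counting step itself is simple to sketch: identify local maxima of the convergence map and histogram their heights. The helper below uses a 3x3 neighbourhood maximum and the 8 bins in $-0.1 < \kappa < 0.2$ quoted above; the Gaussian noise map is only a placeholder for a simulated convergence field.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peak_histogram(kappa_map, bin_edges):
    """Count local maxima of a convergence map, binned by peak height kappa."""
    # A pixel is a peak if it equals the maximum over its 3x3 neighbourhood
    is_peak = kappa_map == maximum_filter(kappa_map, size=3, mode="nearest")
    counts, _ = np.histogram(kappa_map[is_peak], bins=bin_edges)
    return counts

# Placeholder convergence field (Gaussian noise) on a 512^2-pixel map, with the
# 8 bins in -0.1 < kappa < 0.2 used in the abstract above.
rng = np.random.default_rng(2)
kappa = rng.normal(0.0, 0.03, size=(512, 512))
print(peak_histogram(kappa, np.linspace(-0.1, 0.2, 9)))
```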
This is the third in a series of papers that develop a new and flexible model to predict weak-lensing (WL) peak counts, which have been shown to be a very valuable non-Gaussian probe of cosmology. In this paper, we compare the cosmological information extracted from WL peak counts using different filtering techniques applied to the galaxy shear data, including linear filtering with a Gaussian and with two compensated filters (the starlet wavelet and the aperture mass), and the nonlinear filtering method MRLens. We present improvements to our model that account for realistic survey conditions, namely masks, shear-to-convergence transformations, and non-constant noise. We create simulated peak counts from our stochastic model, from which we obtain constraints on the matter density $\Omega_\mathrm{m}$, the power spectrum normalisation $\sigma_8$, and the dark-energy parameter $w_0$. We use two methods for parameter inference: a copula likelihood and approximate Bayesian computation (ABC). We measure the contour width in the $\Omega_\mathrm{m}$-$\sigma_8$ degeneracy direction and the figure of merit to compare parameter constraints from different filtering techniques. We find that starlet filtering outperforms the Gaussian kernel, and that including peak counts from different smoothing scales helps to lift parameter degeneracies. Peak counts from different smoothing scales with a compensated filter show very little cross-correlation, and adding information from different scales can therefore strongly enhance the available information. Measuring peak counts separately at different scales yields tighter constraints than using a combined peak histogram from a single map that includes multiscale information. Our results suggest that a compensated filter function, with counts included separately from different smoothing scales, yields the tightest constraints on cosmological parameters from WL peaks.
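A sketch of the multiscale strategy favoured above: measure peak counts separately after smoothing the map at each scale and concatenate them into one data vector. For brevity the sketch uses a Gaussian kernel as a stand-in for the compensated (starlet or aperture mass) filters compared in the paper, and a noise map in place of a simulated convergence field.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def peak_counts(kappa_map, bin_edges):
    """Histogram of local-maximum heights (same peak definition as above)."""
    is_peak = kappa_map == maximum_filter(kappa_map, size=3, mode="nearest")
    return np.histogram(kappa_map[is_peak], bins=bin_edges)[0]

def multiscale_data_vector(kappa_map, scales_pix, bin_edges):
    """Concatenate peak counts measured separately at each smoothing scale,
    rather than merging all scales into a single histogram."""
    return np.concatenate([
        peak_counts(gaussian_filter(kappa_map, sigma=s), bin_edges)
        for s in scales_pix
    ])

# Placeholder noise map; scales are in pixels and the Gaussian kernel stands in
# for the compensated (starlet / aperture mass) filters discussed above.
rng = np.random.default_rng(3)
kappa = rng.normal(0.0, 0.03, size=(512, 512))
print(multiscale_data_vector(kappa, scales_pix=[1, 2, 4],
                             bin_edges=np.linspace(-0.1, 0.2, 9)))
```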