
An analytic approach to number counts of weak-lensing peak detections

Posted by Matteo Maturi
Publication date: 2009
Research field: Physics
Paper language: English





We develop and apply an analytic method to predict peak counts in weak-lensing surveys. It is based on the theory of Gaussian random fields and is suitable for quantifying the level of spurious detections caused by chance projections of large-scale structures, as well as by the shape and shot noise contributed by the background galaxies. We compare our method to peak counts obtained from numerical ray-tracing simulations and find good agreement at the expected level. The number of peak detections depends substantially on the shape and size of the filter applied to the gravitational shear field. Our main results are that weak-lensing peak counts are dominated by spurious detections up to signal-to-noise ratios of 3--5, and that most filters yield only a few detections per square degree above this level, while a filter optimised for suppressing large-scale structure noise returns up to an order of magnitude more.
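The core operation described here, filtering the shear or convergence field and counting local maxima above a signal-to-noise threshold, can be sketched in a few lines. Below is a minimal illustration on a pure shape-noise map, so every peak it finds is spurious by construction; the grid size, pixel scale, noise amplitude, and filter scale are assumptions for the example, not values from the paper.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(42)
npix = 1024            # pixels per side (assumed)
pix_arcmin = 0.2       # pixel scale in arcminutes (assumed)
sigma_noise = 0.3      # per-pixel shape-noise amplitude (assumed)

# Noise-only convergence map: any peak detected here is spurious.
kappa = rng.normal(0.0, sigma_noise, (npix, npix))

# Smooth with a 1-arcminute Gaussian (one of many possible filters).
smoothed = ndimage.gaussian_filter(kappa, sigma=1.0 / pix_arcmin)
snr = smoothed / smoothed.std()   # express the map in signal-to-noise units

# A peak is a pixel equal to the maximum of its 3x3 neighbourhood.
is_peak = smoothed == ndimage.maximum_filter(smoothed, size=3)

area_deg2 = (npix * pix_arcmin / 60.0) ** 2
for nu in (3.0, 4.0, 5.0):
    n = np.count_nonzero(is_peak & (snr >= nu))
    print(f"S/N >= {nu}: {n / area_deg2:.2f} spurious peaks per deg^2")
```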




Read also

We propose counting peaks in weak lensing (WL) maps, as a function of their height, to probe models of dark energy and to constrain cosmological parameters. Because peaks can be identified in two-dimensional WL maps directly, they can provide constraints that are free from potential selection effects and biases involved in identifying and determining the masses of galaxy clusters. We have run cosmological N-body simulations to produce WL convergence maps in three models with different constant values of the dark energy equation of state parameter, w=-0.8, -1, and -1.2, with a fixed normalization of the primordial power spectrum (corresponding to present-day normalizations of sigma8=0.742, 0.798, and 0.839, respectively). By comparing the number of WL peaks in 8 convergence bins in the range of -0.1 < kappa < 0.2, in multiple realizations of a single simulated 3x3 degree field, we show that the first (last) pair of models can be distinguished at the 95% (85%) confidence level. A survey with depth and area (20,000 sq. degrees) comparable to those expected from LSST should have a factor of approx. 50 better parameter sensitivity. We find that relatively low-amplitude peaks (kappa = 0.03), which typically do not correspond to a single collapsed halo along the line of sight, account for most of this sensitivity. We study a range of smoothing scales and source galaxy redshifts (z_s). With a fixed source galaxy density of 15/arcmin^2, the best results are provided by the smallest scale we can reliably simulate, 1 arcminute, and z_s = 2 provides substantially better sensitivity than z_s < 1.5.
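As a sketch of the comparison this abstract describes, the snippet below bins mock peak heights into 8 convergence bins over -0.1 < kappa < 0.2 and scores two models against each other with a simple Poisson-like chi-square. The synthetic peak catalogues are invented placeholders standing in for peaks measured on simulated convergence maps.

```python
import numpy as np

rng = np.random.default_rng(1)
bins = np.linspace(-0.1, 0.2, 9)   # 8 convergence bins, as in the abstract

# Placeholder peak-height catalogues for two cosmologies (assumed
# distributions, not simulation output).
peaks_model_a = rng.normal(0.020, 0.05, 5000)
peaks_model_b = rng.normal(0.025, 0.05, 5000)

n_a, _ = np.histogram(peaks_model_a, bins=bins)
n_b, _ = np.histogram(peaks_model_b, bins=bins)

# Simple chi-square between the binned counts, with Poisson variance.
var = n_a + n_b
chi2 = np.sum((n_a - n_b) ** 2 / np.where(var > 0, var, 1))
print(f"chi^2 over 8 bins: {chi2:.1f}")
```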
The statistics of peaks in weak lensing convergence maps is a promising tool to investigate both the properties of dark matter haloes and constrain the cosmological parameters. We study how the number of detectable peaks and its scaling with redshift depend upon the cluster dark matter halo profiles, and use peak statistics to constrain the parameters of the mass-concentration (MC) relation. We investigate which constraints the Euclid mission can set on the MC coefficients, also taking into account degeneracies with the cosmological parameters. To this end, we first estimate the number of peaks and its redshift distribution for different MC relations. We find that the steeper the mass dependence and the larger the normalisation, the higher is the number of detectable clusters, with the total number of peaks changing by up to $40\%$ depending on the MC relation. We then perform a Fisher matrix forecast of the errors on the MC relation parameters as well as cosmological parameters. We find that peak number counts detected by Euclid can determine the normalisation $A_v$, the mass slope $B_v$, the redshift slope $C_v$, and the intrinsic scatter $\sigma_v$ of the MC relation to an unprecedented accuracy: $\sigma(A_v)/A_v = 1\%$, $\sigma(B_v)/B_v = 4\%$, $\sigma(C_v)/C_v = 9\%$, $\sigma(\sigma_v)/\sigma_v = 1\%$ if all cosmological parameters are assumed to be known. Should we relax this severe assumption, constraints are degraded, but remarkably good results can be restored by fixing only some of the parameters or by combining peak counts with Planck data. This precision can give insight into competing scenarios of structure formation and evolution and into the role of baryons in cluster assembly. Alternatively, for a fixed MC relation, future peak counts can perform as well as current BAO and SNeIa data when combined with Planck.
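The Fisher forecast machinery this abstract relies on is standard and easy to sketch. In the toy version below, model_counts is a hypothetical stand-in for a real prediction of binned peak counts as a function of two MC-relation parameters, and Poisson errors are assumed in each signal-to-noise bin; only the bookkeeping (finite-difference derivatives, Fisher matrix, marginalised errors) mirrors the actual procedure.

```python
import numpy as np

def model_counts(A_v, B_v):
    """Hypothetical peak counts in a few S/N bins (placeholder model)."""
    snr = np.array([3.0, 4.0, 5.0, 6.0])
    return 1e4 * A_v * snr ** (-2.0 * B_v)

p0 = np.array([1.0, 0.5])    # fiducial (A_v, B_v), assumed values
step = 1e-4 * p0             # finite-difference step per parameter
n0 = model_counts(*p0)

# Numerical derivatives dN/dp_i of the counts for each parameter.
deriv = []
for i in range(len(p0)):
    dp = np.zeros_like(p0)
    dp[i] = step[i]
    deriv.append((model_counts(*(p0 + dp)) - model_counts(*(p0 - dp)))
                 / (2 * step[i]))
deriv = np.array(deriv)

# Fisher matrix with Poisson variance var_b = N_b in each bin.
F = deriv @ np.diag(1.0 / n0) @ deriv.T
errors = np.sqrt(np.diag(np.linalg.inv(F)))  # marginalised 1-sigma errors
for name, p, e in zip(("A_v", "B_v"), p0, errors):
    print(f"sigma({name})/{name} = {e / p:.1%}")
```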
Massive neutrinos influence the background evolution of the Universe as well as the growth of structure. Being able to model this effect and constrain the sum of their masses is one of the key challenges in modern cosmology. Weak-lensing cosmological constraints will also soon reach higher levels of precision with next-generation surveys like LSST, WFIRST and Euclid. We use the MassiveNus simulations to derive constraints on the sum of neutrino masses $M_\nu$, the present-day total matter density $\Omega_{\rm m}$, and the primordial power spectrum normalization $A_{\rm s}$ in a tomographic setting. We measure the lensing power spectrum as second-order statistics along with peak counts as higher-order statistics on lensing convergence maps generated from the simulations. We investigate the impact of multiscale filtering approaches on cosmological parameters by employing a starlet (wavelet) filter and a concatenation of Gaussian filters. In both cases peak counts perform better than the power spectrum on the set of parameters [$M_\nu$, $\Omega_{\rm m}$, $A_{\rm s}$] respectively by 63%, 40% and 72% when using a starlet filter and by 70%, 40% and 77% when using a multiscale Gaussian. More importantly, we show that when using a multiscale approach, joining power spectrum and peaks does not add any relevant information over considering just the peaks alone. While both multiscale filters behave similarly, we find that with the starlet filter the majority of the information in the data covariance matrix is encoded in the diagonal elements; this can be an advantage when inverting the matrix, speeding up the numerical implementation.
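The multiscale data vector described here can be sketched compactly: filter one convergence map at several scales, histogram the peaks found at each scale, and concatenate. The map below is synthetic noise, and the scales and bins are assumed for illustration rather than taken from the MassiveNus analysis.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(7)
kappa = rng.normal(0.0, 0.3, (512, 512))   # stand-in convergence map
scales_pix = (2, 4, 8, 16)                 # smoothing scales in pixels (assumed)
bins = np.linspace(-4, 6, 11)              # S/N bins for the histograms (assumed)

data_vector = []
for s in scales_pix:
    sm = ndimage.gaussian_filter(kappa, sigma=s)
    snr = sm / sm.std()
    is_peak = sm == ndimage.maximum_filter(sm, size=3)
    counts, _ = np.histogram(snr[is_peak], bins=bins)
    data_vector.append(counts)

# One concatenated vector feeds the likelihood, instead of a single scale.
data_vector = np.concatenate(data_vector)
print(data_vector.shape)   # (4 scales) x (10 bins) = (40,)
```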
This is the third in a series of papers that develop a new and flexible model to predict weak-lensing (WL) peak counts, which have been shown to be a very valuable non-Gaussian probe of cosmology. In this paper, we compare the cosmological information extracted from WL peak counts using different filtering techniques of the galaxy shear data, including linear filtering with a Gaussian and two compensated filters (the starlet wavelet and the aperture mass), and the nonlinear filtering method MRLens. We present improvements to our model that account for realistic survey conditions, namely masks, shear-to-convergence transformations, and non-constant noise. We create simulated peak counts from our stochastic model, from which we obtain constraints on the matter density $\Omega_\mathrm{m}$, the power spectrum normalisation $\sigma_8$, and the dark-energy parameter $w_0$. We use two methods for parameter inference, a copula likelihood and approximate Bayesian computation (ABC). We measure the contour width in the $\Omega_\mathrm{m}$-$\sigma_8$ degeneracy direction and the figure of merit to compare parameter constraints from different filtering techniques. We find that starlet filtering outperforms the Gaussian kernel, and that including peak counts from different smoothing scales helps to lift parameter degeneracies. Peak counts from different smoothing scales with a compensated filter show very little cross-correlation, and adding information from different scales can therefore strongly enhance the available information. Measuring peak counts separately from different scales yields tighter constraints than using a combined peak histogram from a single map that includes multiscale information. Our results suggest that a compensated filter function with counts included separately from different smoothing scales yields the tightest constraints on cosmological parameters from WL peaks.
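A compensated filter, central to this abstract, is one whose kernel integrates to zero. The sketch below builds a simple difference-of-Gaussians kernel as an illustrative stand-in (not the starlet or aperture-mass kernels used in the paper) and shows that filtering with it removes any constant mass-sheet offset, which is one reason counts from different compensated scales carry nearly independent information.

```python
import numpy as np
from scipy import ndimage

size = 65
y, x = np.mgrid[:size, :size] - size // 2
r2 = x**2 + y**2

def gauss2d(sigma):
    g = np.exp(-r2 / (2 * sigma**2))
    return g / g.sum()           # unit-normalised 2D Gaussian

# Compensated kernel: positive core, negative surrounding ring.
kernel = gauss2d(3.0) - gauss2d(6.0)
print(f"kernel sum: {kernel.sum():.2e}")   # ~0: blind to constant offsets

# Filtering a map with a compensated kernel removes any uniform mass sheet.
rng = np.random.default_rng(3)
kappa = rng.normal(0.0, 0.3, (256, 256)) + 0.5   # constant offset added
filtered = ndimage.convolve(kappa, kernel, mode="wrap")
print(f"filtered mean: {filtered.mean():.2e}")    # offset is removed
```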
Alina Sabyr (2021)
In order to extract full cosmological information from next-generation large and high-precision weak lensing (WL) surveys (e.g. Euclid, Roman, LSST), higher-order statistics that probe the small-scale, non-linear regime of large scale structure (LSS) need to be utilized. WL peak counts, which trace overdensities in the cosmic web, are one promising and simple statistic for constraining cosmological parameters. The physical origin of WL peaks has previously been linked to dark matter halos along the line of sight, and this peak-halo connection has been used to develop various semi-analytic halo-based models for predicting peak counts. Here, we study the origin of WL peaks and the effectiveness of halo-based models for WL peak counts using a suite of ray-tracing N-body simulations. We compare WL peaks in convergence maps from the full simulations to those in maps created from only particles associated with halos -- the latter playing the role of a perfect halo model. We find that while halo-only contributions are able to replicate peak counts qualitatively well, halos do not explain all WL peaks. Halos particularly underpredict negative peaks, which are associated with local overdensities in large-scale underdense regions along the line of sight. In addition, neglecting non-halo contributions to peak counts leads to a significant bias on the parameters ($\Omega_{\rm m}$, $\sigma_8$) for surveys larger than 100 deg$^2$. We conclude that other elements of the cosmic web, outside and far away from dark matter halos, need to be incorporated into models of WL peaks in order to infer unbiased cosmological constraints.
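A parameter bias of the kind quoted here can be estimated with the standard linear (Fisher) bias formula, which propagates a systematic offset in the data vector, in this case full-simulation minus halo-only peak counts, into parameter shifts. All numbers in the sketch below are invented placeholders, not measurements from the paper.

```python
import numpy as np

# Mock binned peak counts (invented placeholders).
n_full = np.array([1200.0, 800.0, 350.0, 120.0])   # full simulation maps
n_halo = np.array([1150.0, 790.0, 345.0, 118.0])   # halo-only maps
var = n_full                                        # Poisson variance (assumed)

# Mock derivatives of the counts w.r.t. (Omega_m, sigma_8), also invented.
deriv = np.array([[900.0, 500.0, 200.0, 60.0],
                  [1100.0, 700.0, 280.0, 90.0]])

# Linear bias formula: delta_p = F^{-1} d^T C^{-1} (n_full - n_halo).
F = deriv @ np.diag(1.0 / var) @ deriv.T
delta_d = n_full - n_halo
bias = np.linalg.inv(F) @ (deriv @ (delta_d / var))
print(f"parameter bias (Omega_m, sigma_8): {bias}")
```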