
Fast Weak Lensing Simulations with Halo Model

Published by: Dr Carlo Giocoli
Publication date: 2017
Research field: Physics
Paper language: English





Full ray-tracing maps of gravitational lensing, constructed from N-body simulations, represent a fundamental tool for interpreting present and future weak lensing data. However, limited computational resources and storage capabilities severely restrict the number of realizations that can be performed to accurately sample both the cosmic shear models and their covariance matrices. In this paper we present a halo model formalism for weak gravitational lensing that alleviates these issues by producing weak-lensing mocks at a reduced computational cost. Our model takes as input the halo population within a desired light-cone and the linear power spectrum of the underlying cosmological model. We examine the contribution of substructures within haloes to the cosmic shear power spectrum and quantify it at the percent level. Our method allows us to reconstruct high-resolution convergence maps, for any desired source redshift, of light-cones that realistically trace the matter density distribution in the universe, accounting for masked areas and sample selections. We compare our results with maps of the same large-scale structures constructed using ray-tracing techniques and find very good agreement, at the few-percent level, in both the linear and non-linear regimes. The accuracy and speed of our method demonstrate the potential of our halo model for weak lensing statistics and the possibility of generating large samples of convergence maps for different cosmological models, as needed for the analysis of large galaxy redshift surveys.
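As a rough illustration of this kind of construction (a minimal sketch, not the authors' actual code), the example below deposits the masses of a toy halo catalogue onto a single lens plane, converts the projected surface density into convergence through the critical surface density for an assumed set of distances, and applies Gaussian smoothing as a crude stand-in for the halo density profiles. The flat-sky geometry, distances and random catalogue are all illustrative assumptions.

import numpy as np

# Minimal single-lens-plane sketch (illustrative only, not the paper's method):
# deposit halo masses on a grid, convert the surface density Sigma to
# convergence kappa = Sigma / Sigma_crit, and smooth the map.

G_MSUN_MPC = 4.301e-9   # gravitational constant in Mpc (km/s)^2 / Msun
C_KMS = 2.998e5         # speed of light in km/s

def sigma_crit(d_l, d_s, d_ls):
    """Critical surface density [Msun / Mpc^2] for lens/source distances in Mpc."""
    return C_KMS**2 * d_s / (4.0 * np.pi * G_MSUN_MPC * d_l * d_ls)

def convergence_map(x, y, mass, box_mpc, npix, d_l, d_s, d_ls, smooth_pix=2.0):
    """Paint haloes (positions in Mpc, masses in Msun) onto a convergence map."""
    surf, _, _ = np.histogram2d(x, y, bins=npix,
                                range=[[0, box_mpc], [0, box_mpc]], weights=mass)
    pix_area = (box_mpc / npix) ** 2                     # Mpc^2 per pixel
    kappa = (surf / pix_area) / sigma_crit(d_l, d_s, d_ls)
    # Gaussian smoothing in Fourier space, width smooth_pix pixels
    k = np.fft.fftfreq(npix) * npix
    k2 = k[:, None] ** 2 + k[None, :] ** 2
    kernel = np.exp(-0.5 * k2 * (2.0 * np.pi * smooth_pix / npix) ** 2)
    return np.real(np.fft.ifft2(np.fft.fft2(kappa) * kernel))

# toy light-cone slice: 500 haloes of 10^13-10^15 Msun in a 100 Mpc plane
rng = np.random.default_rng(0)
x, y = rng.uniform(0.0, 100.0, size=(2, 500))
mass = 10.0 ** rng.uniform(13.0, 15.0, size=500)
kappa = convergence_map(x, y, mass, box_mpc=100.0, npix=256,
                        d_l=1000.0, d_s=2300.0, d_ls=1600.0)
print("mean kappa:", kappa.mean(), " max kappa:", kappa.max())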




Read also

We investigate the accuracy of weak lensing simulations by comparing the results of five independently developed lensing simulation codes run on the same input $N$-body simulation. Our comparison focuses on the lensing convergence maps produced by the codes, and in particular on the corresponding PDFs, power spectra and peak counts. We find that the convergence power spectra of the lensing codes agree to $\lesssim 2\%$ out to scales $\ell \approx 4000$. For lensing peak counts, the agreement is better than $5\%$ for peaks with signal-to-noise $\lesssim 6$. We also discuss the systematic errors due to the Born approximation, line-of-sight discretization, particle noise and smoothing. The lensing codes tested deal in markedly different ways with these effects, but they nonetheless display a satisfactory level of agreement. Our results thus suggest that systematic errors due to the operation of existing lensing codes should be small. Moreover, their impact on the convergence power spectra for a lensing simulation can be predicted given its numerical details, which may then serve as a validation test.
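To make the compared statistics concrete, the sketch below computes an azimuthally averaged flat-sky power spectrum and thresholded peak counts for toy convergence maps. The field of view, binning and noise level are assumptions made for this example, not choices taken from any of the five codes.

import numpy as np
from scipy.ndimage import maximum_filter

def power_spectrum(kappa, fov_deg, nbins=20):
    """Azimuthally averaged flat-sky power spectrum C_ell of a square kappa map."""
    n = kappa.shape[0]
    fov = np.deg2rad(fov_deg)
    fk = np.fft.fft2(kappa) * (fov / n) ** 2          # approximate continuum FT
    p2d = np.abs(fk) ** 2 / fov ** 2                  # 2D power spectrum
    freq = np.fft.fftfreq(n, d=fov / n)
    ell = 2.0 * np.pi * np.sqrt(freq[:, None] ** 2 + freq[None, :] ** 2)
    mask = ell > 0
    bins = np.linspace(ell[mask].min(), ell.max() * 1.001, nbins + 1)
    idx = np.digitize(ell[mask], bins)
    cl = np.array([p2d[mask][idx == i].mean() for i in range(1, nbins + 1)])
    return 0.5 * (bins[1:] + bins[:-1]), cl

def peak_counts(kappa, sigma_noise, thresholds=(1, 2, 3, 4, 5, 6)):
    """Count local maxima of the map above signal-to-noise thresholds."""
    is_peak = kappa == maximum_filter(kappa, size=3)
    snr = kappa / sigma_noise
    return {nu: int(np.sum(is_peak & (snr > nu))) for nu in thresholds}

# toy comparison: a Gaussian random map versus a slightly perturbed copy
rng = np.random.default_rng(1)
kappa_a = rng.normal(0.0, 0.02, size=(256, 256))
kappa_b = kappa_a + rng.normal(0.0, 0.002, size=kappa_a.shape)
ell, cl_a = power_spectrum(kappa_a, fov_deg=5.0)
_, cl_b = power_spectrum(kappa_b, fov_deg=5.0)
print("max fractional C_ell difference:", np.max(np.abs(cl_b / cl_a - 1.0)))
print("peak counts:", peak_counts(kappa_a, sigma_noise=0.02))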
Weak gravitational lensing measurements are traditionally made at optical wavelengths, where many highly resolved galaxy images are readily available. However, the Square Kilometre Array (SKA) holds great promise for this type of measurement at radio wavelengths owing to its greatly increased sensitivity and resolution over typical radio surveys. The key to successful weak lensing experiments is measuring the shapes of detected sources to high accuracy. In this document we describe a simulation pipeline designed to simulate radio images of the quality required for weak lensing, typical of SKA observations. We provide as input images with realistic galaxy shapes, which are then simulated to produce images as they would have been observed with a given radio interferometer. We exploit this pipeline to investigate various stages of a weak lensing experiment in order to better understand the effects that may impact shape measurement. We first show how the proposed SKA1-Mid array configurations perform when we compare the (known) input and output ellipticities. We then investigate how making small changes to these array configurations impacts this input-output ellipticity comparison. We also demonstrate how alternative configurations for SKA1-Mid that are smaller in extent, and with faster survey speeds, produce similar performance to those originally proposed. We then show how a notional SKA configuration performs in the same shape measurement challenge. Finally, we describe ongoing efforts to utilise our simulation pipeline to address questions relating to how applicable current (mostly originating from optical data) shape measurement techniques are to future radio surveys. As an alternative to such image-plane techniques, we lastly discuss a shape measurement technique based on the shapelets formalism that reconstructs the source shapes directly from the visibility data.
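To make the input-output ellipticity comparison concrete, the sketch below measures the ellipticity of a postage stamp from Gaussian-weighted quadrupole moments. This is only a generic moments-based estimator applied to a toy image, not the shape measurement used in the SKA pipeline; the weight scale and test source are assumptions.

import numpy as np

def ellipticity_from_moments(stamp, sigma_w=5.0):
    """Return (e1, e2) from Gaussian-weighted quadrupole moments of a stamp."""
    n = stamp.shape[0]
    yy, xx = np.mgrid[0:n, 0:n].astype(float)
    x0, y0 = (n - 1) / 2.0, (n - 1) / 2.0
    # Gaussian weight centred on the stamp to suppress noise in the wings
    w = np.exp(-0.5 * ((xx - x0) ** 2 + (yy - y0) ** 2) / sigma_w ** 2)
    f = stamp * w
    norm = f.sum()
    qxx = (f * (xx - x0) ** 2).sum() / norm
    qyy = (f * (yy - y0) ** 2).sum() / norm
    qxy = (f * (xx - x0) * (yy - y0)).sum() / norm
    e1 = (qxx - qyy) / (qxx + qyy)
    e2 = 2.0 * qxy / (qxx + qyy)
    return e1, e2

# elliptical Gaussian test source elongated along x, so e1 > 0, e2 ~ 0
n = 64
yy, xx = np.mgrid[0:n, 0:n].astype(float) - (n - 1) / 2.0
stamp = np.exp(-0.5 * ((xx / 6.0) ** 2 + (yy / 4.0) ** 2))
print(ellipticity_from_moments(stamp))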
The generation of simulated convergence maps is of key importance in fully exploiting weak lensing by Large Scale Structure (LSS), from which cosmological parameters can be derived. In this paper we present an extension of the PINOCCHIO code, which produces catalogues of dark matter haloes, so that it is capable of simulating weak lensing by LSS. Like WL-MOKA, the method starts with a random realisation of cosmological initial conditions, creates a halo catalogue and projects it onto the past light-cone, and paints in haloes assuming parametric models for the mass density distribution within them. Large-scale modes that are not accounted for by the haloes are constructed using linear theory. We discuss the systematic errors affecting the convergence power spectra when Lagrangian Perturbation Theory at increasing order is used to displace the haloes within PINOCCHIO, and how they depend on the grid resolution. Our approximate method is shown to be very fast when compared to full ray-tracing simulations from an N-body run and able to recover the weak lensing signal, at different redshifts, with a few percent accuracy. It also allows for quickly constructing weak lensing covariance matrices, complementing PINOCCHIO's ability to generate the cluster mass function and galaxy clustering covariances, and thus paving the way for calculating cross-covariances between the different probes. This work advances these approximate methods as tools for simulating and analysing survey data for cosmological purposes.
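The displacement step can be illustrated at lowest order with the Zel'dovich approximation (first-order LPT): solve Psi_k = i k delta_k / k^2 on a periodic grid and move tracers by x = q + D Psi(q). The white-noise density field and growth factor below are toy inputs for the sketch; PINOCCHIO itself uses LPT up to higher order together with its own halo construction.

import numpy as np

def zeldovich_displacement(delta, box):
    """Return the displacement field Psi with shape (3, n, n, n) on a periodic grid."""
    n = delta.shape[0]
    kfreq = 2.0 * np.pi * np.fft.fftfreq(n, d=box / n)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    k2 = kx ** 2 + ky ** 2 + kz ** 2
    k2[0, 0, 0] = 1.0                        # avoid division by zero at the zero mode
    dk = np.fft.fftn(delta)
    psi = []
    for ki in (kx, ky, kz):
        pk = 1j * ki / k2 * dk
        pk[0, 0, 0] = 0.0                    # no displacement from the mean mode
        psi.append(np.real(np.fft.ifftn(pk)))
    return np.array(psi)

# toy linear density field and growth factor (illustrative numbers only)
n, box, growth = 64, 200.0, 0.8
rng = np.random.default_rng(2)
delta = rng.normal(0.0, 1.0, size=(n, n, n))
psi = zeldovich_displacement(delta, box)

# displace tracers placed at grid points: q -> x = q + D * Psi(q), periodic wrap
q = np.indices((n, n, n)).reshape(3, -1).T * (box / n)
idx = np.round(q / (box / n)).astype(int) % n
disp = growth * psi[:, idx[:, 0], idx[:, 1], idx[:, 2]].T
x = (q + disp) % box
print("rms displacement:", disp.std())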
We present results from a set of simulations designed to constrain the weak lensing shear calibration for the Hyper Suprime-Cam (HSC) survey. These simulations include HSC observing conditions and galaxy images from the Hubble Space Telescope (HST), with fully realistic galaxy morphologies and the impact of nearby galaxies included. We find that the inclusion of nearby galaxies in the images is critical to reproducing the observed distributions of galaxy sizes and magnitudes, due to the non-negligible fraction of unrecognized blends in ground-based data, even with the excellent typical seeing of the HSC survey (0.58 arcsec in the $i$-band). Using these simulations, we detect and remove the impact of selection biases due to the correlation of weights and of the quantities used to define the sample (S/N and apparent size) with the lensing shear. We quantify and remove galaxy property-dependent multiplicative and additive shear biases that are intrinsic to our shear estimation method, including a $\sim 10$ per cent level multiplicative bias due to the impact of nearby galaxies and unrecognized blends. Finally, we check the sensitivity of our shear calibration estimates to other cuts made on the simulated samples, and find that the changes in shear calibration are well within the requirements for HSC weak lensing analysis. Overall, the simulations suggest that the weak lensing multiplicative biases in the first-year HSC shear catalog are controlled at the 1 per cent level.
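The multiplicative and additive biases quoted above are conventionally defined through <g_obs> = (1 + m) g_true + c. The sketch below recovers m and c by linear regression over a set of simulated true shears; the per-galaxy responses are synthetic stand-ins and do not reproduce the HSC shear estimation, weighting or selection-bias corrections.

import numpy as np

rng = np.random.default_rng(3)
g_true = np.linspace(-0.05, 0.05, 9)     # applied shears, one component

mean_g_obs = []
for g in g_true:
    # mock per-galaxy estimates: 2% multiplicative bias, small additive offset,
    # shape noise of 0.26 per component, 2e6 galaxies per shear value
    g_obs = 1.02 * g + 5e-4 + rng.normal(0.0, 0.26, size=2_000_000)
    mean_g_obs.append(g_obs.mean())

# fit <g_obs> = (1 + m) * g_true + c
slope, intercept = np.polyfit(g_true, mean_g_obs, deg=1)
print(f"estimated m = {slope - 1.0:+.4f}, c = {intercept:+.5f}")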
Upcoming weak lensing surveys will probe large fractions of the sky with unprecedented accuracy. To infer cosmological constraints, a large ensemble of survey simulations is required to accurately model cosmological observables and their covariances. We develop a parallelized multi-lens-plane pipeline called UFalcon, designed to generate full-sky weak lensing maps from lightcones within a minimal runtime. It makes use of L-PICOLA, an approximate numerical code, which provides a fast and accurate alternative to cosmological $N$-body simulations. The UFalcon maps are constructed by nesting two simulations covering a redshift range from $z=0.1$ to $1.5$ without replicating the simulation volume. We compute the convergence and projected overdensity maps for L-PICOLA in the lightcone or snapshot mode. The generation of such a map, including the L-PICOLA simulation, takes about 3 hours walltime on 220 cores. We use the maps to calculate the spherical harmonic power spectra, which we compare to theoretical predictions and to UFalcon results generated using the full $N$-body code GADGET-2. We then compute the covariance matrix of the full-sky spherical harmonic power spectra using 150 UFalcon maps based on L-PICOLA in lightcone mode. We consider the PDF, the higher-order moments and the variance of the smoothed field variance to quantify the accuracy of the covariance matrix, which we find to be a few percent for scales $\ell \sim 10^2$ to $10^3$. We test the impact of this level of accuracy on cosmological constraints using an optimistic survey configuration, and find that the final results are robust to this level of uncertainty. The speed and accuracy of our pipeline provide a basis to also include further important features such as masking and varying noise, and will allow us to compute covariance matrices for models beyond $\Lambda$CDM. [abridged]
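The covariance estimation from an ensemble of maps can be sketched as follows: stack the band-power measurements from all realizations and take their sample covariance. The synthetic spectra below stand in for the 150 UFalcon power spectra; the fiducial shape and scatter are assumptions made for this example.

import numpy as np

def covariance_from_realizations(cls):
    """cls: array of shape (n_realizations, n_ell_bins); returns (cov, corr)."""
    cov = np.cov(cls, rowvar=False)            # (n_bins, n_bins) sample covariance
    d = np.sqrt(np.diag(cov))
    corr = cov / np.outer(d, d)                # correlation matrix
    return cov, corr

# synthetic ensemble: 150 noisy realizations of a smooth fiducial spectrum
rng = np.random.default_rng(4)
ell = np.arange(100, 1000, 50)
cl_fid = 1e-8 * (ell / 100.0) ** -1.0
cls = cl_fid * (1.0 + rng.normal(0.0, 0.05, size=(150, ell.size)))
cov, corr = covariance_from_realizations(cls)

# with Gaussian scatter of 5%, diag(cov)/Cl^2 should be close to 0.05^2,
# with a sampling error of order sqrt(2/(N-1)) on each variance estimate
print("diag(cov)/Cl^2 (first bins):", np.round(np.diag(cov) / cl_fid ** 2, 4)[:4])
print("expected ~", 0.05 ** 2)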