
Weak Lensing Study in VOICE Survey I: Shear Measurement

Added by: Dezi Liu
Publication date: 2018
Field: Physics
Language: English





The VST Optical Imaging of the CDFS and ES1 Fields (VOICE) Survey is a Guaranteed Time program carried out with the ESO/VST telescope to provide deep optical imaging over two 4 deg$^2$ patches of the sky centred on the CDFS and ES1 pointings. We present the cosmic shear measurement over the 4 deg$^2$ covering the CDFS region in the $r$-band using LensFit. Each of the four 1 deg$^2$ tiles has more than one hundred exposures, of which more than 50 passed a series of image-quality selection criteria for the weak lensing study. The $5\sigma$ limiting magnitude in the $r$-band is 26.1 for point sources, which is $\sim$1 mag deeper than other weak lensing surveys in the literature (e.g. the Kilo Degree Survey, KiDS, at the VST). The photometric redshifts are estimated using the VOICE $u,g,r,i$ bands together with the near-infrared VIDEO data in $Y,J,H,K_s$. The shear-weighted mean redshift of the shear catalogue is 0.87. The effective galaxy number density is 16.35 gal/arcmin$^2$, nearly twice that of KiDS. The performance of LensFit on such a deep dataset was calibrated using VOICE-like mock image simulations. Furthermore, we have analyzed the reliability of the shear catalogue by calculating the star-galaxy cross-correlations, the tomographic shear correlations in two redshift bins, and the contamination from blended galaxies. As a further sanity check, we have constrained cosmological parameters by exploring the parameter space with Population Monte Carlo sampling. For a flat $\Lambda$CDM model we obtain $\Sigma_8 = \sigma_8(\Omega_m/0.3)^{0.5} = 0.68^{+0.11}_{-0.15}$.
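For concreteness, the shear-weighted summary statistics quoted above can be computed as in the following sketch, which assumes the widely used Heymans et al. (2012) definition of the effective number density and hypothetical arrays of LensFit weights, photometric redshifts, and survey area; this is an illustration, not the paper's actual pipeline.

```python
import numpy as np

def effective_number_density(w, area_arcmin2):
    """Effective galaxy number density per arcmin^2, assuming the
    Heymans et al. (2012) definition: n_eff = (sum w)^2 / sum(w^2) / A."""
    w = np.asarray(w, dtype=float)
    return (w.sum() ** 2 / (w ** 2).sum()) / area_arcmin2

def weighted_mean_redshift(z, w):
    """Mean redshift of the catalogue weighted by the shear (LensFit) weights."""
    z, w = np.asarray(z, dtype=float), np.asarray(w, dtype=float)
    return (w * z).sum() / w.sum()

# Hypothetical usage: w and z are per-galaxy arrays from the shear catalogue,
# area_arcmin2 is the unmasked survey area in arcmin^2.
# n_eff  = effective_number_density(w, area_arcmin2)  # ~16 gal/arcmin^2 in VOICE
# z_mean = weighted_mean_redshift(z, w)               # ~0.87 in VOICE
```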



Related research

The VST Optical Imaging of the CDFS and ES1 Fields (VOICE) Survey was proposed to obtain deep optical $ugri$ imaging of the CDFS and ES1 fields using the VLT Survey Telescope (VST). At present, the observations for the CDFS field have been completed, comprising in total about 4.9 deg$^2$ down to $r_\mathrm{AB}\sim26$ mag. In the companion paper by Fu et al. (2018), we present the weak lensing shear measurements for the $r$-band images with seeing $\le$ 0.9 arcsec. In this paper, we perform image simulations to calibrate possible biases of the measured shear signals. Statistically, the properties of the simulated point spread function (PSF) and galaxies show good agreement with those of the observations. The multiplicative bias is calibrated to reach an accuracy of $\sim$3.0%. We study the sensitivity of the bias to undetected faint galaxies and to neighboring galaxies. We find that undetected galaxies contribute to the multiplicative bias at the level of $\sim$0.3%. Further analysis shows that galaxies with lower signal-to-noise ratio (SNR) are impacted more significantly because the undetected galaxies skew the background noise distribution. For the neighboring galaxies, we find that although most have been rejected in the shape measurement procedure, about one third of them still remain in the final shear sample. They show a larger ellipticity dispersion and contribute $\sim$0.2% to the multiplicative bias. Such a bias can be removed by further eliminating these neighboring galaxies, but doing so reduces the effective number density of galaxies considerably. Therefore, more efficient methods should be developed for future deep weak lensing surveys.
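The multiplicative and additive biases discussed above are conventionally defined through the linear model $g^\mathrm{obs} = (1+m)\,g^\mathrm{true} + c$. A minimal sketch of such a calibration fit is shown below, assuming hypothetical arrays of input and recovered shears from image simulations; it is illustrative only, not the actual calibration code.

```python
import numpy as np

def fit_shear_bias(g_true, g_obs, weights=None):
    """Fit g_obs = (1 + m) * g_true + c for one shear component and
    return the multiplicative bias m and additive bias c."""
    slope, intercept = np.polyfit(g_true, g_obs, deg=1, w=weights)
    return slope - 1.0, intercept

# Hypothetical usage with matched simulated shears:
# m1, c1 = fit_shear_bias(g1_true, g1_measured, weights=lensfit_weights)
```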
Metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without the need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate-sized simulations with galaxy images that had relatively high signal-to-noise ratios, and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that, for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five percent level. In each simulation we applied a small, few-percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
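A minimal sketch of the finite-difference response estimator described above, including the selection-response term, is given below. It assumes ellipticities remeasured on counterfactually sheared images and boolean selection masks derived from those sheared measurements; all names are illustrative rather than the paper's implementation.

```python
import numpy as np

DELTA_GAMMA = 0.01  # small artificial shear applied during metacalibration

def metacal_mean_shear(e, e_plus, e_minus, sel, sel_plus, sel_minus):
    """One-component metacalibration estimate of the mean shear.

    e              : ellipticities measured on the original images
    e_plus/minus   : ellipticities remeasured after shearing by +/- DELTA_GAMMA
    sel            : selection applied to the unsheared measurements
    sel_plus/minus : the same cuts applied to the +/- sheared measurements
    """
    # Shear response: how the mean ellipticity reacts to an applied shear.
    r_gamma = np.mean(e_plus[sel] - e_minus[sel]) / (2.0 * DELTA_GAMMA)
    # Selection response: how the selection itself reacts to an applied shear.
    r_sel = (np.mean(e[sel_plus]) - np.mean(e[sel_minus])) / (2.0 * DELTA_GAMMA)
    # Response-corrected mean shear.
    return np.mean(e[sel]) / (r_gamma + r_sel)
```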
We present results from a set of simulations designed to constrain the weak lensing shear calibration for the Hyper Suprime-Cam (HSC) survey. These simulations include HSC observing conditions and galaxy images from the Hubble Space Telescope (HST), with fully realistic galaxy morphologies and the impact of nearby galaxies included. We find that the inclusion of nearby galaxies in the images is critical to reproducing the observed distributions of galaxy sizes and magnitudes, due to the non-negligible fraction of unrecognized blends in ground-based data, even with the excellent typical seeing of the HSC survey (0.58 arcsec in the $i$-band). Using these simulations, we detect and remove the impact of selection biases due to the correlation of the weights and of the quantities used to define the sample (S/N and apparent size) with the lensing shear. We quantify and remove galaxy-property-dependent multiplicative and additive shear biases that are intrinsic to our shear estimation method, including a $\sim$10 per cent level multiplicative bias due to the impact of nearby galaxies and unrecognized blends. Finally, we check the sensitivity of our shear calibration estimates to other cuts made on the simulated samples, and find that the changes in shear calibration are well within the requirements for the HSC weak lensing analysis. Overall, the simulations suggest that the weak lensing multiplicative biases in the first-year HSC shear catalog are controlled at the 1 per cent level.
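The galaxy-property-dependent biases mentioned above are commonly characterised by repeating the bias fit in bins of observed properties such as S/N, then interpolating the correction per galaxy. The sketch below illustrates that general idea with hypothetical inputs; it is not the HSC calibration pipeline itself.

```python
import numpy as np

def multiplicative_bias_vs_snr(snr, g_true, g_obs, edges):
    """Fit the multiplicative bias m in bins of S/N using matched
    true/measured shears from image simulations (hypothetical inputs)."""
    edges = np.asarray(edges, dtype=float)
    m_bins = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (snr >= lo) & (snr < hi)
        slope, _ = np.polyfit(g_true[sel], g_obs[sel], deg=1)
        m_bins.append(slope - 1.0)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, np.array(m_bins)

def apply_calibration(g_obs, snr, centers, m_bins):
    """Correct measured shears with the interpolated, S/N-dependent bias."""
    return g_obs / (1.0 + np.interp(snr, centers, m_bins))
```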
The complete 10-year survey from the Large Synoptic Survey Telescope (LSST) will image $\sim$20,000 square degrees of sky in six filter bands every few nights, bringing the final survey depth to $r\sim27.5$, with over 4 billion well-measured galaxies. To take full advantage of this unprecedented statistical power, the systematic errors associated with weak lensing measurements need to be controlled to a level similar to the statistical errors. This work is the first attempt to quantitatively estimate the absolute level and statistical properties of the systematic errors on weak lensing shear measurements due to the most important physical effects in the LSST system via high-fidelity ray-tracing simulations. We identify and isolate the different sources of algorithm-independent, \textit{additive} systematic errors on shear measurements for LSST and predict their impact on the final cosmic shear measurements using conventional weak lensing analysis techniques. We find that the main source of the errors is an inability to adequately characterise the atmospheric point spread function (PSF) due to its high-frequency spatial variation on angular scales smaller than $\sim$10 arcmin in single short exposures, which propagates into a spurious shear correlation function at the $10^{-4}$--$10^{-3}$ level on these scales. With the large multi-epoch dataset that will be acquired by LSST, the stochastic errors average out, bringing the final spurious shear correlation function to a level very close to the statistical errors. Our results imply that the cosmological constraints from LSST will not be severely limited by these algorithm-independent, additive systematic effects.
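The statement that stochastic, exposure-to-exposure spurious shear averages down in the multi-epoch data can be illustrated with a toy Monte Carlo: if the additive systematic is uncorrelated between exposures, the spurious correlation of the $N$-exposure average scales as $1/N$. The numbers below are made up for illustration and are unrelated to the actual ray-tracing simulations.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pairs, n_exp = 20_000, 200   # galaxy pairs and exposures (toy values)
xi_single = 1e-3               # assumed per-exposure spurious shear correlation

# Each pair shares a systematic component within an exposure (giving the
# assumed correlation xi_single), drawn independently for every exposure.
common = rng.normal(0.0, np.sqrt(xi_single), size=(n_exp, n_pairs))
g_a = common + rng.normal(0.0, 0.01, size=(n_exp, n_pairs))
g_b = common + rng.normal(0.0, 0.01, size=(n_exp, n_pairs))

def xi(a, b):
    """Simple correlation estimator over pairs."""
    return np.mean(a * b)

print("single-exposure xi   :", xi(g_a[0], g_b[0]))             # ~ xi_single
print("N-exposure average xi:", xi(g_a.mean(0), g_b.mean(0)))   # ~ xi_single / n_exp
```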
3D data compression techniques can be used to determine the natural basis of radial eigenmodes that encode the maximum amount of information in a tomographic large-scale structure survey. We explore the potential of the Karhunen-Loève decomposition in reducing the dimensionality of the data vector for cosmic shear measurements, and apply it to the final data from the CFHTLenS survey. We find that practically all of the cosmological information can be encoded in a single radial eigenmode, from which we are able to reproduce constraints compatible with those found in the fiducial tomographic analysis (done with 7 redshift bins) with a factor of ~30 fewer data points. This simplifies the problem of computing the two-point function covariance matrix from mock catalogues by the same factor, or by a factor of ~800 for an analytical covariance. The resulting set of radial eigenfunctions is close to $\ell$-independent, and therefore they can be used as redshift-dependent galaxy weights. This simplifies the application of the Karhunen-Loève decomposition to real-space and Fourier-space data, and allows one to explore the effective radial window function of the principal eigenmodes, as well as the associated shear maps, in order to identify potential systematics. We also apply the method to extended parameter spaces and verify that additional information may be gained by including a second mode to break parameter degeneracies. The data and analysis code are publicly available at https://github.com/emiliobellini/kl_sample.
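A minimal sketch of the Karhunen-Loève compression idea described above: solve the generalized eigenvalue problem defined by a signal covariance and a noise covariance in redshift-bin space, then project the tomographic data vector onto the leading eigenmode. All matrices below are random placeholders standing in for the actual survey covariances.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n_z = 7                                      # number of tomographic redshift bins

# Placeholder covariances in redshift-bin space (replace with the real ones).
A = rng.normal(size=(n_z, n_z))
signal = A @ A.T                             # signal (cosmic shear) covariance
noise = np.diag(rng.uniform(0.5, 1.5, n_z))  # shape-noise covariance

# Karhunen-Loeve modes: generalized eigenproblem  signal e = lambda * noise e,
# ordered by decreasing signal-to-noise eigenvalue lambda.
lam, modes = eigh(signal, noise)
order = np.argsort(lam)[::-1]
lam, modes = lam[order], modes[:, order]

# The leading mode acts as a redshift-dependent weight; projecting the
# tomographic data vector onto it compresses 7 bins into 1.
weights = modes[:, 0]
data_vector = rng.normal(size=n_z)           # placeholder data at one angular scale
compressed = weights @ data_vector
```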