
Prospects for weak lensing/cosmic shear with VLTs

Published by Yannick Mellier
Publication date: 2002
Research field: Physics
Paper language: English
Author: Y. Mellier





The present status of weak lensing analyses of clusters of galaxies and of cosmic shear surveys is presented and discussed. We focus on the impact of very large telescopes on present-day and future surveys and compare their potential with that of HST or wide-field 4-meter telescopes.




Read also

The complete 10-year survey from the Large Synoptic Survey Telescope (LSST) will image $\sim$20,000 square degrees of sky in six filter bands every few nights, bringing the final survey depth to $r\sim27.5$, with over 4 billion well-measured galaxies. To take full advantage of this unprecedented statistical power, the systematic errors associated with weak lensing measurements need to be controlled to a level similar to the statistical errors. This work is the first attempt to quantitatively estimate the absolute level and statistical properties of the systematic errors on weak lensing shear measurements due to the most important physical effects in the LSST system via high-fidelity ray-tracing simulations. We identify and isolate the different sources of algorithm-independent, additive systematic errors on shear measurements for LSST and predict their impact on the final cosmic shear measurements using conventional weak lensing analysis techniques. We find that the main source of the errors comes from an inability to adequately characterise the atmospheric point spread function (PSF) due to its high-frequency spatial variation on angular scales smaller than $\sim10$ arcmin in the single short exposures, which propagates into a spurious shear correlation function at the $10^{-4}$--$10^{-3}$ level on these scales. With the large multi-epoch dataset that will be acquired by LSST, the stochastic errors average out, bringing the final spurious shear correlation function to a level very close to the statistical errors. Our results imply that the cosmological constraints from LSST will not be severely limited by these algorithm-independent, additive systematic effects.
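The claim that stochastic, per-exposure PSF errors average out over the multi-epoch stack can be illustrated with a toy calculation (a sketch under the simplifying assumption that the spurious per-epoch shear is independent white noise, which real atmospheric residuals only approximate; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n_epochs, n_sources = 200, 10_000  # roughly LSST-like visit counts

# Toy model: each epoch contributes an independent, zero-mean spurious
# shear of RMS 1e-3 per source (a stand-in for uncorrected atmospheric
# PSF residuals in a single short exposure).
spurious = rng.normal(0.0, 1e-3, size=(n_epochs, n_sources))

single_epoch_rms = spurious[0].std()
stacked_rms = spurious.mean(axis=0).std()

# Independent errors average down as 1/sqrt(n_epochs), so
# stacked_rms * sqrt(n_epochs) is close to single_epoch_rms.
```

For 200 epochs the stacked spurious shear is suppressed by about a factor of 14 relative to a single exposure, which is the mechanism the abstract invokes.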
3D data compression techniques can be used to determine the natural basis of radial eigenmodes that encode the maximum amount of information in a tomographic large-scale structure survey. We explore the potential of the Karhunen-Loève decomposition in reducing the dimensionality of the data vector for cosmic shear measurements, and apply it to the final data from the CFHTLenS survey. We find that practically all of the cosmological information can be encoded in one single radial eigenmode, from which we are able to reproduce constraints compatible with those found in the fiducial tomographic analysis (done with 7 redshift bins) with a factor of ~30 fewer datapoints. This simplifies the problem of computing the two-point function covariance matrix from mock catalogues by the same factor, or by a factor of ~800 for an analytical covariance. The resulting set of radial eigenfunctions is close to $\ell$-independent, and therefore they can be used as redshift-dependent galaxy weights. This simplifies the application of the Karhunen-Loève decomposition to real-space and Fourier-space data, and allows one to explore the effective radial window function of the principal eigenmodes as well as the associated shear maps in order to identify potential systematics. We also apply the method to extended parameter spaces and verify that additional information may be gained by including a second mode to break parameter degeneracies. The data and analysis code are publicly available at https://github.com/emiliobellini/kl_sample.
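The compression idea can be sketched with a PCA-style eigendecomposition on a toy tomographic data vector (illustrative only: the paper's Karhunen-Loève modes are built from the signal and noise covariances of the actual shear data, and every number below is invented):

```python
import numpy as np

rng = np.random.default_rng(0)
nz, n_mocks = 7, 2000  # 7 redshift bins, as in the fiducial analysis

# Toy mocks: the "cosmological" signal varies along a single radial
# direction across the bins, plus uncorrelated noise in each bin.
signal_mode = np.linspace(0.5, 1.5, nz)
mocks = rng.normal(size=(n_mocks, 1)) * signal_mode \
      + 0.1 * rng.normal(size=(n_mocks, nz))

# Eigendecomposition of the mock covariance; the leading eigenmode
# plays the role of the single informative radial mode.
cov = np.cov(mocks, rowvar=False)
evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
leading = evecs[:, -1]
var_fraction = evals[-1] / evals.sum()    # close to 1 in this toy

# Compress each 7-element data vector to a single number.
compressed = mocks @ leading
```

In this toy the leading mode carries essentially all of the variance, mirroring the abstract's finding that one radial eigenmode retains practically all of the cosmological information.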
Metacalibration is a recently introduced method to accurately measure weak gravitational lensing shear using only the available imaging data, without need for prior information about galaxy properties or calibration from simulations. The method involves distorting the image with a small known shear, and calculating the response of a shear estimator to that applied shear. The method was shown to be accurate in moderate sized simulations with galaxy images that had relatively high signal-to-noise ratios, and without significant selection effects. In this work we introduce a formalism to correct for both shear response and selection biases. We also observe that, for images with relatively low signal-to-noise ratios, the correlated noise that arises during the metacalibration process results in significant bias, for which we develop a simple empirical correction. To test this formalism, we created large image simulations based on both parametric models and real galaxy images, including tests with realistic point-spread functions. We varied the point-spread function ellipticity at the five percent level. In each simulation we applied a small, few percent shear to the galaxy images. We introduced additional challenges that arise in real data, such as detection thresholds, stellar contamination, and missing data. We applied cuts on the measured galaxy properties to induce significant selection effects. Using our formalism, we recovered the input shear with an accuracy better than a part in a thousand in all cases.
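The response calculation at the heart of metacalibration can be sketched on a noise-free toy galaxy (illustrative only: real metacalibration deconvolves the PSF, shears the image itself, and reconvolves, whereas here we simply re-render an elliptical Gaussian; `make_galaxy` and `measure_e1` are made-up helpers, not any pipeline's API):

```python
import numpy as np

def make_galaxy(e1, n=64, sigma=6.0):
    """Toy elliptical Gaussian 'galaxy' with a small x-axis distortion e1."""
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    xs, ys = (1.0 - e1) * x, (1.0 + e1) * y   # shear the coordinates
    return np.exp(-(xs**2 + ys**2) / (2.0 * sigma**2))

def measure_e1(img):
    """Unweighted second-moment (chi-type) ellipticity estimator."""
    n = img.shape[0]
    y, x = np.mgrid[:n, :n] - (n - 1) / 2.0
    flux = img.sum()
    qxx = (img * x**2).sum() / flux
    qyy = (img * y**2).sum() / flux
    return (qxx - qyy) / (qxx + qyy)

# Metacalibration-style response: re-render with +/- a small known
# shear and take the finite difference of the estimator.
dg, g_true = 0.01, 0.05
R = (measure_e1(make_galaxy(g_true + dg))
     - measure_e1(make_galaxy(g_true - dg))) / (2.0 * dg)

# Calibrated shear estimate: raw ellipticity divided by the response.
g_hat = measure_e1(make_galaxy(g_true)) / R   # close to g_true
```

For this chi-type estimator on a round-ish Gaussian the response is close to 2, and dividing by it recovers the applied distortion; the paper's formalism generalizes the same idea to selection biases and noisy data.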
Highly precise weak lensing shear measurement is required for statistical weak gravitational lensing analyses, such as cosmic shear measurement, to achieve tight constraints on the cosmological parameters. For this purpose, accurate shape measurement of background galaxies is essential, and any systematic error in the measurement must be carefully corrected. One of the main systematic errors comes from photon noise: Poisson noise in the flux from the atmosphere (noise bias). We investigate how photon noise introduces a systematic error in shear measurement within the framework of the ERA method we developed in earlier papers, and we give a practical correction method. The method is tested by simulations with real galaxy images under various conditions, and it is confirmed that it can correct $80\sim90\%$ of the noise bias, except for galaxies with very low signal-to-noise ratio.
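Noise bias arises generically because shape estimators are nonlinear in the noisy pixel values, so zero-mean noise does not average to zero in the estimate. A minimal illustration with a ratio of noisy quantities, loosely analogous to the moment ratios used in shape measurement (not the ERA estimator itself; all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
a_true, b_true, sigma = 1.0, 10.0, 1.5

# Zero-mean noise on the numerator and denominator of a ratio estimator.
a = a_true + rng.normal(0.0, sigma, n)
b = b_true + rng.normal(0.0, sigma, n)

true_ratio = a_true / b_true              # 0.1
naive = np.mean(a / b)                    # biased high: E[a/b] != E[a]/E[b]

# Second-order prediction of the bias, analogous in spirit to analytic
# noise-bias corrections: E[a/b] ~ (a_true/b_true) * (1 + sigma**2 / b_true**2)
predicted = true_ratio * (1.0 + sigma**2 / b_true**2)
```

Averaging over noise realizations leaves a residual offset even though the noise itself has zero mean, which is exactly why a dedicated correction is needed.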
We present results from a set of simulations designed to constrain the weak lensing shear calibration for the Hyper Suprime-Cam (HSC) survey. These simulations include HSC observing conditions and galaxy images from the Hubble Space Telescope (HST), with fully realistic galaxy morphologies and the impact of nearby galaxies included. We find that the inclusion of nearby galaxies in the images is critical to reproducing the observed distributions of galaxy sizes and magnitudes, due to the non-negligible fraction of unrecognized blends in ground-based data, even with the excellent typical seeing of the HSC survey (0.58 arcsec in the $i$-band). Using these simulations, we detect and remove the impact of selection biases due to the correlation of weights and the quantities used to define the sample (S/N and apparent size) with the lensing shear. We quantify and remove galaxy property-dependent multiplicative and additive shear biases that are intrinsic to our shear estimation method, including a $\sim$10 per cent-level multiplicative bias due to the impact of nearby galaxies and unrecognized blends. Finally, we check the sensitivity of our shear calibration estimates to other cuts made on the simulated samples, and find that the changes in shear calibration are well within the requirements for HSC weak lensing analysis. Overall, the simulations suggest that the weak lensing multiplicative biases in the first-year HSC shear catalog are controlled at the 1 per cent level.