
Optimal PSF modeling for weak lensing: complexity and sparsity

Published by: Paulin-Henriksson Stephane
Publication date: 2009
Research field: Physics
Paper language: English


We investigate the impact of point spread function (PSF) fitting errors on cosmic shear measurements using the concepts of complexity and sparsity. Complexity, introduced in a previous paper, characterizes the number of degrees of freedom of the PSF. For instance, fitting an underlying PSF with a model of low complexity will lead to small statistical errors on the model parameters; however, these parameters could suffer from large biases. Alternatively, fitting with a large number of parameters will tend to reduce biases at the expense of statistical errors. We perform an optimisation of scatters and biases by studying the mean squared error of a PSF model. We also characterize the sparsity of a model, which describes how efficiently the model is able to represent the underlying PSF using a limited number of free parameters. We present the general case and illustrate it for a realistic example of a PSF fitted with shapelet basis sets. We derive the relation between the complexity and sparsity of the PSF model, the signal-to-noise ratio of stars and the systematic errors on cosmological parameters. With the constraint of maintaining the systematics below the statistical uncertainties, this leads to a relation between the number of stars required to calibrate the PSF and the sparsity. We discuss the impact of our results for current and future cosmic shear surveys. In the typical case where the biases can be represented as a power law of the complexity, we show that current weak lensing surveys can calibrate the PSF with a few stars, while future surveys will require hard constraints on the sparsity in order to calibrate the PSF with 50 stars.
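The scatter/bias tradeoff the abstract describes can be illustrated with a toy fit: a hypothetical 1-D profile and a plain polynomial model (not the shapelet basis sets used in the paper). Low complexity gives a small variance but a large bias; high complexity reverses the two. All shapes, noise levels and degrees below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "true PSF" profile; the paper works with shapelet basis sets,
# this polynomial fit is only an illustration of the same tradeoff.
x = np.linspace(-1, 1, 61)
true_psf = (1 + (3 * x) ** 2) ** -2.5

def mse_terms(degree, sigma=0.02, n_trials=300):
    """Return (bias^2, variance) of a polynomial PSF model of the given
    complexity, averaged over pixels, from repeated fits to noisy stars."""
    fits = []
    for _ in range(n_trials):
        noisy = true_psf + rng.normal(0.0, sigma, x.size)
        fits.append(np.polyval(np.polyfit(x, noisy, degree), x))
    fits = np.asarray(fits)
    bias2 = np.mean((fits.mean(axis=0) - true_psf) ** 2)  # systematic part
    var = np.mean(fits.var(axis=0))                       # statistical part
    return bias2, var

# Low complexity: small variance, large bias; high complexity: the reverse.
for deg in (2, 6, 14):
    b2, v = mse_terms(deg)
    print(f"degree {deg:2d}: bias^2={b2:.2e}  variance={v:.2e}  mse={b2 + v:.2e}")
```

Minimising the total mean squared error, bias squared plus variance, then picks an intermediate complexity, which is the optimisation the abstract refers to.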


Read also

A main science goal for the Large Synoptic Survey Telescope (LSST) is to measure the cosmic shear signal from weak lensing to extreme accuracy. One difficulty, however, is that with the short exposure time ($\simeq$15 seconds) proposed, the spatial variation of the Point Spread Function (PSF) shapes may be dominated by the atmosphere, in addition to optics errors. While optics errors mainly cause the PSF to vary on angular scales similar to or larger than a single CCD sensor, the atmosphere generates stochastic structures on a wide range of angular scales. It thus becomes a challenge to infer the multi-scale, complex atmospheric PSF patterns by interpolating the sparsely sampled stars in the field. In this paper we present a new method, PSFent, for interpolating the PSF shape parameters, based on reconstructing the underlying shape parameter maps with a multi-scale maximum entropy algorithm. We demonstrate, using images from the LSST Photon Simulator, the performance of our approach relative to a 5th-order polynomial fit (representing the current standard) and a simple boxcar smoothing technique. Quantitatively, PSFent predicts more accurate PSF models in all scenarios, and the residual PSF errors are spatially less correlated. This improvement in PSF interpolation leads to a factor of 3.5 lower systematic errors in the shear power spectrum on scales smaller than $\sim 13$, compared to polynomial fitting. We estimate that with PSFent and for stellar densities greater than $\simeq 1/{\rm arcmin}^{2}$, the spurious shear correlation from PSF interpolation, after combining a complete 10-year dataset from LSST, is lower than the corresponding statistical uncertainties on the cosmic shear power spectrum, even under a conservative scenario.
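The interpolation problem this abstract describes can be sketched with a toy PSF ellipticity field: a smooth, optics-like gradient that a global polynomial fit captures, plus small-scale "atmospheric" structure that survives as a residual. The field, amplitudes and stellar density below are illustrative assumptions, and the fit is a low-order polynomial baseline, not the PSFent algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical PSF e1 field over a unit field of view: a smooth optics-like
# part plus small-scale "atmospheric" structure a global polynomial
# cannot follow. Scales and amplitudes are illustrative, not LSST values.
def true_e1(x, y):
    smooth = 0.02 * x + 0.01 * y**2
    atmo = 0.01 * np.sin(12 * np.pi * x) * np.cos(10 * np.pi * y)
    return smooth + atmo

# Sparsely sampled "stars" in the field
n_stars = 80
xs, ys = rng.uniform(0, 1, n_stars), rng.uniform(0, 1, n_stars)
e1 = true_e1(xs, ys)

# 2nd-order 2-D polynomial fit (the baseline the paper compares to is
# 5th order; the idea is the same)
def design(x, y):
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

coeffs, *_ = np.linalg.lstsq(design(xs, ys), e1, rcond=None)

# Evaluate residuals on a dense grid: the smooth part is captured, the
# small-scale part remains as a spatially correlated PSF modelling error.
gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
model = design(gx.ravel(), gy.ravel()) @ coeffs
resid = true_e1(gx.ravel(), gy.ravel()) - model
print(f"rms residual: {resid.std():.4f}")
```

The residual rms sits near the amplitude of the small-scale term, which is the failure mode a multi-scale method is designed to address.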
Rachel Mandelbaum (2015)
We present a pedagogical review of the weak gravitational lensing measurement process and its connection to major scientific questions such as dark matter and dark energy. Then we describe common ways of parametrizing systematic errors and understanding how they affect weak lensing measurements. Finally, we discuss several instrumental systematics and how they fit into this context, and conclude with some future perspective on how progress can be made in understanding the impact of instrumental systematics on weak lensing measurements.
Weak gravitational lensing (WL) is one of the most powerful techniques to learn about the dark sector of the universe. To extract the WL signal from astronomical observations, galaxy shapes must be measured and corrected for the point spread function (PSF) of the imaging system with extreme accuracy. Future WL missions (such as the Wide-Field Infrared Survey Telescope, WFIRST) will use a family of hybrid near-infrared CMOS detectors (HAWAII-4RG) that are untested for accurate WL measurements. Like all image sensors, these devices are subject to conversion gain nonlinearities (voltage response to collected photo-charge) that bias the shape and size of bright objects such as reference stars that are used in PSF determination. We study this type of detector nonlinearity (NL) and show how to derive requirements on it from WFIRST PSF size and ellipticity requirements. We simulate the PSF optical profiles expected for WFIRST and measure the fractional error in the PSF size and the absolute error in the PSF ellipticity as a function of star magnitude and the NL model. For our nominal NL model (a quadratic correction), we find that, uncalibrated, NL can induce an error of 0.01 (fractional size) and 0.00175 (absolute ellipticity error) in the H158 bandpass for the brightest unsaturated stars in WFIRST. In addition, our simulations show that to limit the bias of the size and ellipticity errors in the H158 band to approximately 10% of the estimated WFIRST error budget, the parameter of our quadratic NL model must be calibrated to about 1% and 2.4%, respectively. We present a fitting formula that can be used to estimate WFIRST detector NL requirements once a true PSF error budget is established.
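A quadratic NL model of this kind can be sketched in a few lines: apply a measured signal of the form $S' = S - \beta S^2$ to a toy Gaussian star and compare second-moment sizes. The coefficient `beta`, the star brightness and the grid are assumed illustrative values, not WFIRST or HAWAII-4RG calibrations.

```python
import numpy as np

# Toy sketch of a quadratic detector nonlinearity: measured = S - beta*S^2.
# The Gaussian star and beta are illustrative, not calibrated values.
n = 41
y, x = np.mgrid[:n, :n] - n // 2
star = 5e4 * np.exp(-(x**2 + y**2) / (2 * 2.0**2))  # bright star, peak 5e4

def second_moment_size(img):
    """Flux-weighted rms size from unweighted quadrupole moments."""
    f = img.sum()
    cx = (img * x).sum() / f
    cy = (img * y).sum() / f
    qxx = (img * (x - cx) ** 2).sum() / f
    qyy = (img * (y - cy) ** 2).sum() / f
    return np.sqrt(qxx + qyy)

beta = 1e-6                        # assumed quadratic coefficient (per count)
measured = star - beta * star**2   # NL suppresses the bright core

r_true = second_moment_size(star)
r_nl = second_moment_size(measured)
print(f"fractional size error: {(r_nl - r_true) / r_true:.4f}")
```

Because the nonlinearity clips the bright core more than the wings, the star appears slightly larger, which is the sign of the size bias the abstract quantifies.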
This is the third paper on the improvement of systematic errors in our weak lensing analysis using an elliptical weight function, called E-HOLICs. In the previous papers we succeeded in avoiding the error that depends on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of the error is the random count noise that comes from the Poisson noise of the sky counts. Random count noise introduces additional moments and a centroid shift error; the first-order terms cancel on averaging, but the second-order terms do not. We derive the equations that correct for these effects when measuring the moments and ellipticity of the image, and test their validity using simulated images. We find that the systematic error becomes less than 1% in the measured ellipticity for objects with $S/N>3$.
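The cancellation argument can be checked numerically with a toy round source: the noise-induced centroid shift is first order in the noise and averages to zero, while its square, which propagates into the measured second moments via the parallel-axis theorem, does not. The source and noise level are illustrative; this is not the E-HOLICs correction itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Round Gaussian source plus sky count noise; measure the x-centroid many
# times. First-order noise terms cancel in the average, second-order
# terms (the squared shift) survive and bias the second moments.
n = 33
yy, xx = np.mgrid[:n, :n] - n // 2
img0 = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))  # unit-peak round source

shifts = []
for _ in range(2000):
    noisy = img0 + rng.normal(0.0, 0.05, img0.shape)  # sky count noise
    shifts.append((noisy * xx).sum() / noisy.sum())   # x-centroid estimate
shifts = np.asarray(shifts)

print(f"<dx>   = {shifts.mean():+.4f}")   # consistent with zero
print(f"<dx^2> = {(shifts**2).mean():.4f}")  # clearly nonzero
```

Moments measured about the noisy centroid are offset by this nonzero squared shift, which is why a correction at second order is needed.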
We introduce a novel approach to reconstruct dark matter mass maps from weak gravitational lensing measurements. The cornerstone of the proposed method lies in a new modelling of the matter density field in the Universe as a mixture of two components: (1) a sparsity-based component that captures the non-Gaussian structure of the field, such as peaks or halos at different spatial scales; and (2) a Gaussian random field, which is known to represent well the linear characteristics of the field. We propose an algorithm called MCAlens which jointly estimates these two components. MCAlens is based on an alternating minimization incorporating both sparse recovery and a proximal iterative Wiener filtering. Experimental results on simulated data show that the proposed method exhibits improved estimation accuracy compared to state-of-the-art mass map reconstruction methods.
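The two-component separation idea can be sketched in 1-D: soft-thresholding recovers the sparse peaks while a simple boxcar smoothing stands in for the proximal Wiener step, the two alternating until they converge. The signal, threshold and kernel are illustrative assumptions; this is a generic alternating-minimization sketch, not the MCAlens algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: isolated "peaks" (sparse component) on top of a correlated
# Gaussian component, plus white noise.
n = 256
sparse_true = np.zeros(n)
sparse_true[[40, 120, 200]] = [5.0, 4.0, 6.0]
kernel = np.ones(15) / 15
gauss_true = np.convolve(rng.normal(0.0, 1.0, n), kernel, mode="same")
data = sparse_true + gauss_true + rng.normal(0.0, 0.1, n)

def soft_threshold(x, t):
    """Proximal operator of the l1 norm (sparse recovery step)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

sparse_est = np.zeros(n)
gauss_est = np.zeros(n)
for _ in range(30):  # alternating minimization over the two components
    sparse_est = soft_threshold(data - gauss_est, 1.5)
    gauss_est = np.convolve(data - sparse_est, kernel, mode="same")

print("recovered peak locations:", np.flatnonzero(sparse_est > 1.0))
```

Each component is updated with the prior suited to it, which is the core of the mixture modelling the abstract describes.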