
Gravitational Lensing Accuracy Testing 2010 (GREAT10) Challenge Handbook

Posted by Thomas Kitching
Publication date: 2010
Research language: English





GRavitational lEnsing Accuracy Testing 2010 (GREAT10) is a public image analysis challenge aimed at the development of algorithms to analyze astronomical images. Specifically, the challenge is to measure varying image distortions in the presence of a variable convolution kernel, pixelization and noise. This is the second in a series of challenges set to the astronomy, computer science and statistics communities, providing a structured environment in which methods can be improved and tested in preparation for planned astronomical surveys. GREAT10 extends upon previous work by introducing variable fields into the challenge. The Galaxy Challenge involves the precise measurement of galaxy shape distortions, quantified locally by two parameters called shear, in the presence of a known convolution kernel. Crucially, the convolution kernel and the simulated gravitational lensing shape distortion both now vary as a function of position within the images, as is the case for real data. In addition, we introduce the Star Challenge that concerns the reconstruction of a variable convolution kernel, similar to that in a typical astronomical observation. This document details the GREAT10 Challenge for potential participants. Continually updated information is also available from http://www.greatchallenges.info.
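
As a minimal illustration of the quantity being measured, the sketch below applies a local reduced shear (the two shear parameters) to an intrinsic galaxy ellipticity using the standard weak-lensing transformation; the numerical values are placeholders and are not taken from the challenge simulations.

```python
# A minimal sketch (not from the handbook): how a local reduced shear
# g = g1 + i*g2 distorts an intrinsic complex galaxy ellipticity e_int.
# Standard weak-lensing transformation: e_obs = (e_int + g) / (1 + conj(g)*e_int).

def apply_shear(e_int: complex, g: complex) -> complex:
    """Return the observed (sheared) ellipticity for a given intrinsic ellipticity."""
    return (e_int + g) / (1 + g.conjugate() * e_int)

# Placeholder values: a few-percent shear, typical of weak lensing.
e_int = 0.20 + 0.10j   # intrinsic ellipticity (two components)
g = 0.03 - 0.01j       # the two local shear parameters (g1, g2)
print(apply_shear(e_int, g))   # shifted from e_int by roughly g
```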




Read also

The GRavitational lEnsing Accuracy Testing 3 (GREAT3) challenge is the third in a series of image analysis challenges, with a goal of testing and facilitating the development of methods for analyzing astronomical images that will be used to measure weak gravitational lensing. This measurement requires extremely precise estimation of very small galaxy shape distortions, in the presence of far larger intrinsic galaxy shapes and distortions due to the blurring kernel caused by the atmosphere, telescope optics, and instrumental effects. The GREAT3 challenge is posed to the astronomy, machine learning, and statistics communities, and includes tests of three specific effects that are of immediate relevance to upcoming weak lensing surveys, two of which have never been tested in a community challenge before. These effects include realistically complex galaxy models based on high-resolution imaging from space; a spatially varying, physically motivated blurring kernel; and the combination of multiple different exposures. To facilitate entry by people new to the field, and for use as a diagnostic tool, the simulation software for the challenge is publicly available, though the exact parameters used for the challenge are blinded. Sample scripts to analyze the challenge data using existing methods will also be provided. See http://great3challenge.info and http://great3.projects.phys.ucl.ac.uk/leaderboard/ for more information.
The GRavitational lEnsing Accuracy Testing 2008 (GREAT08) Challenge focuses on a problem that is of crucial importance for future observations in cosmology. The shapes of distant galaxies can be used to determine the properties of dark energy and the nature of gravity, because light from those galaxies is bent by gravity from the intervening dark matter. The observed galaxy images appear distorted, although only slightly, and their shapes must be precisely disentangled from the effects of pixelisation, convolution and noise. The worldwide gravitational lensing community has made significant progress in techniques to measure these distortions via the Shear TEsting Program (STEP). Via STEP, we have run challenges within our own community, and come to recognise that this particular image analysis problem is ideally matched to experts in statistical inference, inverse problems and computational learning. Thus, in order to continue the progress seen in recent years, we are seeking an infusion of new ideas from these communities. This document details the GREAT08 Challenge for potential participants. Please visit http://www.great08challenge.info for the latest information.
In this paper we present results from the weak lensing shape measurement GRavitational lEnsing Accuracy Testing 2010 (GREAT10) Galaxy Challenge. This marks an order of magnitude step change in the level of scrutiny employed in weak lensing shape measurement analysis. We provide descriptions of each method tested and include 10 evaluation metrics over 24 simulation branches. GREAT10 was the first shape measurement challenge to include variable fields; both the shear field and the Point Spread Function (PSF) vary across the images in a realistic manner. The variable fields enable a variety of metrics that are inaccessible to constant shear simulations, including a direct measure of the impact of shape measurement inaccuracies, and of PSF size and ellipticity, on the shear power spectrum. To assess the impact of shape measurement bias for cosmic shear we present a general pseudo-Cl formalism that propagates spatially varying systematics in cosmic shear through to power spectrum estimates. We also show how one-point estimators of bias can be extracted from variable shear simulations. The GREAT10 Galaxy Challenge received 95 submissions and saw a factor of 3 improvement in the accuracy achieved by shape measurement methods. The best methods achieve sub-percent average biases. We find a strong dependence of accuracy on signal-to-noise ratio, and indications of a weak dependence on galaxy type and size. Some requirements for the most ambitious cosmic shear experiments are met above a signal-to-noise ratio of 20. These results have the caveat that the simulated PSF was a ground-based PSF. Our results are a snapshot of the accuracy of current shape measurement methods and are a benchmark upon which improvement can continue. This provides a foundation for a better understanding of the strengths and limitations of shape measurement methods.
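
A minimal sketch of a one-point shear-bias estimator of the kind referred to in the abstract above: fitting the conventional multiplicative and additive biases m and c in the linear model g_obs = (1 + m) g_true + c. The shear values below are synthetic placeholders, not challenge submissions.

```python
# A minimal sketch of a one-point shear-bias estimator: fit the conventional
# multiplicative (m) and additive (c) biases in  g_obs = (1 + m) * g_true + c.
# The shears below are synthetic placeholders, not challenge data.
import numpy as np

rng = np.random.default_rng(0)
g_true = rng.uniform(-0.05, 0.05, size=10_000)                      # input shears
g_obs = 1.002 * g_true + 3e-4 + rng.normal(0.0, 0.01, g_true.size)  # toy measurements

slope, c = np.polyfit(g_true, g_obs, deg=1)   # least-squares line: slope = 1 + m
m = slope - 1.0
print(f"m = {m:.4f}, c = {c:.5f}")            # sub-percent bias means |m| < 0.01
```
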
Here we introduce PHAT, the PHoto-z Accuracy Testing programme, an international initiative to test and compare different methods of photo-z estimation. Two different test environments are set up, one (PHAT0) based on simulations to test the basic functionality of the different photo-z codes, and another one (PHAT1) based on data from the GOODS survey. The accuracy of the different methods is expressed and ranked by the global photo-z bias, scatter, and outlier rates. Most methods agree well on PHAT0 but produce photo-z scatters that can differ by up to a factor of two even in this idealised case. A larger spread in accuracy is found for PHAT1. Few methods benefit from the addition of mid-IR photometry. Remaining biases and systematic effects can be explained by shortcomings in the different template sets and the use of priors on the one hand and an insufficient training set on the other hand. Scatters of 4-8% in Delta_z/(1+z) were obtained, consistent with other studies. However, somewhat larger outlier rates (>7.5% with Delta_z/(1+z)>0.15; >4.5% after cleaning) are found for all codes. There is a general trend that empirical codes produce smaller biases than template-based codes. The systematic, quantitative comparison of different photo-z codes presented here is a snapshot of the current state-of-the-art of photo-z estimation and sets a standard for the assessment of photo-z accuracy in the future. The rather large outlier rates reported here for PHAT1 on real data should be investigated further since they are most probably also present (and possibly hidden) in many other studies. The test data sets are publicly available and can be used to compare new methods to established ones and help in guiding future photo-z method development. (abridged)
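
A minimal sketch of the photo-z quality metrics PHAT ranks methods by, all defined on Delta_z/(1+z): global bias, scatter, and the outlier rate with the 0.15 cut quoted in the abstract above. The redshift arrays below are synthetic placeholders.

```python
# A minimal sketch of the PHAT-style photo-z metrics, all defined on
# dz = (z_phot - z_spec) / (1 + z_spec): global bias, scatter and outlier rate.
# The redshift arrays below are synthetic placeholders.
import numpy as np

def photoz_metrics(z_phot, z_spec, outlier_cut=0.15):
    dz = (np.asarray(z_phot) - np.asarray(z_spec)) / (1.0 + np.asarray(z_spec))
    bias = dz.mean()                               # global photo-z bias
    scatter = dz.std()                             # photo-z scatter
    outliers = np.mean(np.abs(dz) > outlier_cut)   # fraction with |dz| > 0.15
    return bias, scatter, outliers

rng = np.random.default_rng(1)
z_spec = np.linspace(0.1, 1.5, 1000)
z_phot = z_spec + 0.05 * (1.0 + z_spec) * rng.standard_normal(z_spec.size)
print(photoz_metrics(z_phot, z_spec))
```
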
We investigate the accuracy of weak lensing simulations by comparing the results of five independently developed lensing simulation codes run on the same input $N$-body simulation. Our comparison focuses on the lensing convergence maps produced by the codes, and in particular on the corresponding PDFs, power spectra and peak counts. We find that the convergence power spectra of the lensing codes agree to $\lesssim 2\%$ out to scales $\ell \approx 4000$. For lensing peak counts, the agreement is better than $5\%$ for peaks with signal-to-noise $\lesssim 6$. We also discuss the systematic errors due to the Born approximation, line-of-sight discretization, particle noise and smoothing. The lensing codes tested deal in markedly different ways with these effects, but they nonetheless display a satisfactory level of agreement. Our results thus suggest that systematic errors due to the operation of existing lensing codes should be small. Moreover, their impact on the convergence power spectra for a lensing simulation can be predicted given its numerical details, which may then serve as a validation test.
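
A minimal sketch of the peak-count statistic used in the code comparison above: counting local maxima of a noise-normalised convergence map above a signal-to-noise threshold. The convergence map and noise level here are placeholders.

```python
# A minimal sketch of a peak-count statistic: the number of local maxima of a
# noise-normalised convergence map above a signal-to-noise threshold.
# The convergence map and noise level below are placeholders.
import numpy as np
from scipy.ndimage import maximum_filter

def peak_counts(kappa, sigma_noise, snr_threshold=3.0):
    """Count pixels that are 3x3 local maxima with kappa / sigma_noise > threshold."""
    snr = kappa / sigma_noise
    local_max = maximum_filter(snr, size=3) == snr
    return int(np.count_nonzero(local_max & (snr > snr_threshold)))

rng = np.random.default_rng(2)
kappa = rng.normal(0.0, 0.02, size=(512, 512))   # placeholder convergence map
print(peak_counts(kappa, sigma_noise=0.02))
```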