
The Third Gravitational Lensing Accuracy Testing (GREAT3) Challenge Handbook

Posted by Rachel Mandelbaum
Publication date: 2013
Research field: Physics
Paper language: English





The GRavitational lEnsing Accuracy Testing 3 (GREAT3) challenge is the third in a series of image analysis challenges, with the goal of testing and facilitating the development of methods for analyzing astronomical images that will be used to measure weak gravitational lensing. This measurement requires extremely precise estimation of very small galaxy shape distortions, in the presence of far larger intrinsic galaxy shapes and of distortions due to the blurring kernel caused by the atmosphere, telescope optics, and instrumental effects. The GREAT3 challenge is posed to the astronomy, machine learning, and statistics communities, and includes tests of three specific effects that are of immediate relevance to upcoming weak lensing surveys, two of which have never been tested in a community challenge before. These effects are: realistically complex galaxy models based on high-resolution imaging from space; a spatially varying, physically motivated blurring kernel; and the combination of multiple different exposures. To facilitate entry by people new to the field, and for use as a diagnostic tool, the simulation software for the challenge is publicly available, though the exact parameters used for the challenge are blinded. Sample scripts to analyze the challenge data using existing methods will also be provided. See http://great3challenge.info and http://great3.projects.phys.ucl.ac.uk/leaderboard/ for more information.
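The core measurement task described above — estimating small shape distortions from pixelated images — can be illustrated with a minimal unweighted second-moments estimator. This is a textbook sketch, not the method of any GREAT3 entry; the synthetic elliptical Gaussian below simply stands in for a galaxy image:

```python
import numpy as np

def ellipticity_from_moments(image):
    """Estimate ellipticity (e1, e2) from unweighted second moments
    of a pixelated image: Q_ij = sum I(x) (x_i - xbar_i)(x_j - xbar_j)."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx]
    flux = image.sum()
    xbar = (image * x).sum() / flux
    ybar = (image * y).sum() / flux
    qxx = (image * (x - xbar) ** 2).sum() / flux
    qyy = (image * (y - ybar) ** 2).sum() / flux
    qxy = (image * (x - xbar) * (y - ybar)).sum() / flux
    denom = qxx + qyy
    return (qxx - qyy) / denom, 2.0 * qxy / denom

# Synthetic elliptical Gaussian, elongated along x (sigma_x=6 > sigma_y=4),
# so e1 should come out positive and e2 near zero.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-(((x - 32) / 6.0) ** 2 + ((y - 32) / 4.0) ** 2) / 2.0)
e1, e2 = ellipticity_from_moments(img)
```

Real pipelines use weighted moments or model fitting instead, since unweighted moments are badly behaved in the presence of noise — which is exactly why challenges like GREAT3 exist.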




Read also

GRavitational lEnsing Accuracy Testing 2010 (GREAT10) is a public image analysis challenge aimed at the development of algorithms to analyze astronomical images. Specifically, the challenge is to measure varying image distortions in the presence of a variable convolution kernel, pixelization and noise. This is the second in a series of challenges set to the astronomy, computer science and statistics communities, providing a structured environment in which methods can be improved and tested in preparation for planned astronomical surveys. GREAT10 extends upon previous work by introducing variable fields into the challenge. The Galaxy Challenge involves the precise measurement of galaxy shape distortions, quantified locally by two parameters called shear, in the presence of a known convolution kernel. Crucially, the convolution kernel and the simulated gravitational lensing shape distortion both now vary as a function of position within the images, as is the case for real data. In addition, we introduce the Star Challenge that concerns the reconstruction of a variable convolution kernel, similar to that in a typical astronomical observation. This document details the GREAT10 Challenge for potential participants. Continually updated information is also available from http://www.greatchallenges.info.
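The role of the convolution kernel in the GREAT10 setup can be sketched with a minimal FFT-based blur: convolving a circular Gaussian "galaxy" with a Gaussian PSF adds their second moments in quadrature. This is a schematic periodic-boundary illustration with arbitrary sizes, not the challenge's simulation code:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Circular Gaussian convolution kernel, normalized to unit sum."""
    y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return k / k.sum()

def convolve_fft(image, kernel):
    """Periodic convolution via FFT: embed the kernel, roll its center
    to the origin, multiply in Fourier space."""
    k = np.zeros_like(image)
    k[:kernel.shape[0], :kernel.shape[1]] = kernel
    shift = (kernel.shape[0] - 1) // 2
    k = np.roll(k, (-shift, -shift), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k)))

gal = gaussian_kernel(128, 3.0)              # "galaxy": sigma = 3 px
obs = convolve_fft(gal, gaussian_kernel(33, 4.0))  # PSF: sigma = 4 px
# Second moment of the blurred image: ~ 3^2 + 4^2 = 25 px^2.
y, x = np.mgrid[0:128, 0:128]
xbar = (obs * x).sum() / obs.sum()
qxx = (obs * (x - xbar) ** 2).sum() / obs.sum()
```

The challenge's harder version of this problem is the inverse one: given only `obs` and stars sampling the kernel, recover the pre-convolution shape.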
The GRavitational lEnsing Accuracy Testing 3 (GREAT3) challenge is an image analysis competition that aims to test algorithms to measure weak gravitational lensing from astronomical images. The challenge started in October 2013 and ends on 30 April 2014. The challenge focuses on testing the impact on weak lensing measurements of realistically complex galaxy morphologies, a realistic point spread function, and the combination of multiple different exposures. It includes simulated ground- and space-based data. The details of the challenge are described in [15], and the challenge website and its leaderboard can be found at http://great3challenge.info and http://great3.projects.phys.ucl.ac.uk/leaderboard/, respectively.
The GRavitational lEnsing Accuracy Testing 2008 (GREAT08) Challenge focuses on a problem that is of crucial importance for future observations in cosmology. The shapes of distant galaxies can be used to determine the properties of dark energy and the nature of gravity, because light from those galaxies is bent by gravity from the intervening dark matter. The observed galaxy images appear distorted, although only slightly, and their shapes must be precisely disentangled from the effects of pixelisation, convolution and noise. The worldwide gravitational lensing community has made significant progress in techniques to measure these distortions via the Shear TEsting Program (STEP). Via STEP, we have run challenges within our own community, and come to recognise that this particular image analysis problem is ideally matched to experts in statistical inference, inverse problems and computational learning. Thus, in order to continue the progress seen in recent years, we are seeking an infusion of new ideas from these communities. This document details the GREAT08 Challenge for potential participants. Please visit http://www.great08challenge.info for the latest information.
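The statistical core of the problem GREAT08 poses — a tiny coherent shear buried under much larger random intrinsic shapes — can be illustrated with a short Monte Carlo: to first order each observed ellipticity is intrinsic shape plus shear, so averaging many galaxies recovers the shear. The numbers below (shear 0.03, intrinsic scatter 0.3) are illustrative, not taken from the challenge:

```python
import numpy as np

rng = np.random.default_rng(42)
g_true = 0.03                        # constant applied shear (one component)
e_int = rng.normal(0.0, 0.3, 10**6)  # intrinsic shape noise, zero mean
e_obs = e_int + g_true               # first-order weak-lensing approximation
g_hat = e_obs.mean()                 # shear estimate from 10^6 galaxies
```

The per-galaxy signal-to-noise is ~0.1, so precision comes entirely from averaging — which is why percent-level multiplicative biases in the shape estimator, the target of STEP and GREAT08, matter so much.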
We investigate the accuracy of weak lensing simulations by comparing the results of five independently developed lensing simulation codes run on the same input $N$-body simulation. Our comparison focuses on the lensing convergence maps produced by the codes, and in particular on the corresponding PDFs, power spectra and peak counts. We find that the convergence power spectra of the lensing codes agree to $\lesssim 2\%$ out to scales $\ell \approx 4000$. For lensing peak counts, the agreement is better than $5\%$ for peaks with signal-to-noise $\lesssim 6$. We also discuss the systematic errors due to the Born approximation, line-of-sight discretization, particle noise and smoothing. The lensing codes tested deal in markedly different ways with these effects, but they nonetheless display a satisfactory level of agreement. Our results thus suggest that systematic errors due to the operation of existing lensing codes should be small. Moreover their impact on the convergence power spectra for a lensing simulation can be predicted given its numerical details, which may then serve as a validation test.
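The convergence power spectra compared in that study are, schematically, azimuthal averages of a map's 2-D Fourier power. A minimal flat-sky version (ignoring units and the proper $\ell$ scaling) might look like this; the white-noise test map is just a stand-in for a convergence map:

```python
import numpy as np

def binned_power_spectrum(kappa, nbins=8):
    """Azimuthally averaged 2-D power spectrum of a square map:
    |FFT|^2 averaged in annular bins of wavenumber magnitude."""
    n = kappa.shape[0]
    p2d = np.abs(np.fft.fftshift(np.fft.fft2(kappa))) ** 2
    ky, kx = np.mgrid[0:n, 0:n] - n // 2
    k = np.hypot(kx, ky).ravel()
    edges = np.linspace(1, k.max(), nbins + 1)   # DC mode (k=0) excluded
    idx = np.digitize(k, edges)
    return np.array([p2d.ravel()[idx == i].mean()
                     for i in range(1, nbins + 1)])

# White noise has a flat spectrum, so all bins should be comparable.
kappa = np.random.default_rng(1).normal(size=(128, 128))
ps = binned_power_spectrum(kappa)
```

A real comparison between lensing codes would additionally convert pixel wavenumbers to multipoles $\ell$ and correct for map area and pixel window, but the binning logic is the same.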
Here we introduce PHAT, the PHoto-z Accuracy Testing programme, an international initiative to test and compare different methods of photo-z estimation. Two different test environments are set up, one (PHAT0) based on simulations to test the basic functionality of the different photo-z codes, and another one (PHAT1) based on data from the GOODS survey. The accuracy of the different methods is expressed and ranked by the global photo-z bias, scatter, and outlier rates. Most methods agree well on PHAT0 but produce photo-z scatters that can differ by up to a factor of two even in this idealised case. A larger spread in accuracy is found for PHAT1. Few methods benefit from the addition of mid-IR photometry. Remaining biases and systematic effects can be explained by shortcomings in the different template sets and the use of priors on the one hand and an insufficient training set on the other hand. Scatters of 4-8% in Delta_z/(1+z) were obtained, consistent with other studies. However, somewhat larger outlier rates (>7.5% with Delta_z/(1+z)>0.15; >4.5% after cleaning) are found for all codes. There is a general trend that empirical codes produce smaller biases than template-based codes. The systematic, quantitative comparison of different photo-z codes presented here is a snapshot of the current state-of-the-art of photo-z estimation and sets a standard for the assessment of photo-z accuracy in the future. The rather large outlier rates reported here for PHAT1 on real data should be investigated further since they are most probably also present (and possibly hidden) in many other studies. The test data sets are publicly available and can be used to compare new methods to established ones and help in guiding future photo-z method development. (abridged)
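The quality metrics used for the PHAT-style ranking — bias, scatter, and outlier rate in Delta_z/(1+z) — are straightforward to compute. A minimal sketch on synthetic redshifts follows; the 0.05 scatter and the exact clipping convention are illustrative, not the paper's definitions:

```python
import numpy as np

def photoz_metrics(z_phot, z_spec, outlier_cut=0.15):
    """Bias, scatter, and outlier rate of dz = (z_phot - z_spec)/(1 + z_spec),
    with bias and scatter computed after clipping |dz| > outlier_cut."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    outliers = np.abs(dz) > outlier_cut
    bias = dz[~outliers].mean()
    scatter = dz[~outliers].std()
    return bias, scatter, outliers.mean()

# Synthetic catalogue: unbiased photo-zs with 5% scatter in (1+z).
rng = np.random.default_rng(0)
z_spec = rng.uniform(0.1, 2.0, 5000)
z_phot = z_spec + 0.05 * (1.0 + z_spec) * rng.normal(size=5000)
bias, scatter, rate = photoz_metrics(z_phot, z_spec)
```

With Gaussian errors the outlier rate is tiny by construction; the point of the PHAT1 result is that real codes on real data produce far heavier tails than this toy model.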