
Photo-z Performance for Precision Cosmology II : Empirical Verification

Published by Rongmon Bordoloi
Publication date: 2012
Research field: Physics
Paper language: English





The success of future large scale weak lensing surveys will critically depend on the accurate estimation of photometric redshifts of very large samples of galaxies. This in turn depends on both the quality of the photometric data and the photo-z estimators. In a previous study (Bordoloi et al. 2010), we focused primarily on the impact of photometric quality on photo-z estimates and on the development of novel techniques to construct the N(z) of tomographic bins at the high level of precision required for precision cosmology, as well as on correcting issues such as imprecise corrections for Galactic reddening. We used the same set of templates to generate the simulated photometry as were then used in the photo-z code, thereby removing any effects of template error. In this work we now include the effects of template error by generating a simulated photometric data set from actual COSMOS photometry. We use the trick of simulating redder photometry of galaxies at higher redshifts by applying a bluer set of passbands to low-z galaxies with known redshifts. We find that template error is a rather small factor in photo-z performance, at the photometric precision and filter complement expected for all-sky surveys. With only a small subset of training galaxies with spectroscopic redshifts, it is in principle possible to construct tomographic redshift bins whose mean redshift is known, from photo-z alone, to the required accuracy of 0.002(1+z).
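The abstract's key quantity is the bias in the mean redshift of a tomographic bin relative to the 0.002(1+z) requirement. The toy example below (entirely hypothetical numbers, not the paper's simulation) illustrates how that comparison is made: scatter true redshifts into photo-z estimates, select a photo-z bin, and compare the bin's mean photo-z to the mean true redshift of the selected galaxies.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical illustration: true redshifts near z ~ 0.8, scattered into
# photo-z estimates with Gaussian scatter sigma_z = 0.03 * (1 + z).
z_true = rng.uniform(0.7, 0.9, size=200_000)
z_phot = z_true + rng.normal(0.0, 0.03 * (1.0 + z_true))

# Select a tomographic bin on photo-z, then compare the bin's mean photo-z
# to the mean true redshift of the galaxies actually selected.
in_bin = (z_phot > 0.75) & (z_phot < 0.85)
bias = z_phot[in_bin].mean() - z_true[in_bin].mean()

# The precision-cosmology requirement quoted in the abstract:
requirement = 0.002 * (1.0 + z_true[in_bin].mean())
print(f"mean-z bias = {bias:+.4f}, requirement = {requirement:.4f}")
```

In practice the bias is estimated against an N(z) calibrated with a spectroscopic training subset, as the paper describes; the snippet only shows the bookkeeping of the requirement itself.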




Read also

The EXtreme PREcision Spectrograph (EXPRES) is a new Doppler spectrograph designed to reach a radial velocity measurement precision sufficient to detect Earth-like exoplanets orbiting nearby, bright stars. We report on extensive laboratory testing and on-sky observations to quantitatively assess the instrumental radial velocity measurement precision of EXPRES, with a focused discussion of individual terms in the instrument error budget. We find that EXPRES can reach a single-measurement instrument calibration precision better than 10 cm/s, not including photon noise from stellar observations. We also report on the performance of the various environmental, mechanical, and optical subsystems of EXPRES, assessing any contributions to radial velocity error. For atmospheric and telescope related effects, this includes the fast tip-tilt guiding system, atmospheric dispersion compensation, and the chromatic exposure meter. For instrument calibration, this includes the laser frequency comb (LFC), flat-field light source, CCD detector, and effects in the optical fibers. Modal noise is mitigated to a negligible level via a chaotic fiber agitator, which is especially important for wavelength calibration with the LFC. Regarding detector effects, we empirically assess the impact on radial velocity precision due to pixel-position non-uniformities (PPNU) and charge transfer inefficiency (CTI). EXPRES has begun its science survey to discover exoplanets orbiting G-dwarf and K-dwarf stars, in addition to transit spectroscopy and measurements of the Rossiter-McLaughlin effect.
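An instrument error budget of the kind described above is typically combined assuming independent terms add in quadrature. The sketch below uses purely illustrative numbers, not the actual EXPRES budget, to show how individual subsystem terms roll up against the sub-10 cm/s goal.

```python
import math

# Hypothetical, illustrative error-budget terms in cm/s -- NOT the real
# EXPRES budget; they only demonstrate quadrature combination of
# independent error sources.
budget_cm_s = {
    "wavelength calibration (LFC)": 4.0,
    "fiber / modal noise": 3.0,
    "detector (PPNU, CTI)": 4.0,
    "guiding and ADC residuals": 5.0,
}

# Independent Gaussian error terms add in quadrature.
total = math.sqrt(sum(v**2 for v in budget_cm_s.values()))
print(f"combined instrumental error: {total:.1f} cm/s")
```

Correlated terms would need a covariance treatment instead; quadrature is the usual first-order bookkeeping for a budget like this.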
71 - Rachel Mandelbaum 2017
Weak gravitational lensing, the deflection of light by mass, is one of the best tools to constrain the growth of cosmic structure with time and reveal the nature of dark energy. I discuss the sources of systematic uncertainty in weak lensing measurem ents and their theoretical interpretation, including our current understanding and other options for future improvement. These include long-standing concerns such as the estimation of coherent shears from galaxy images or redshift distributions of galaxies selected based on photometric redshifts, along with systematic uncertainties that have received less attention to date because they are subdominant contributors to the error budget in current surveys. I also discuss methods for automated systematics detection using survey data of the 2020s. The goal of this review is to describe the current state of the field and what must be done so that if weak lensing measurements lead toward surprising conclusions about key questions such as the nature of dark energy, those conclusions will be credible.
Matched filters are routinely used in cosmology in order to detect galaxy clusters from mm observations through their thermal Sunyaev-Zeldovich (tSZ) signature. In addition, they naturally provide an observable, the detection signal-to-noise or significance, which can be used as a mass proxy in number counts analyses of tSZ-selected cluster samples. In this work, we show that this observable is, in general, non-Gaussian, and that it suffers from a positive bias, which we refer to as optimisation bias. Both aspects arise from the fact that the signal-to-noise is constructed through an optimisation operation on noisy data, and hold even if the cluster signal is modelled perfectly well, no foregrounds are present, and the noise is Gaussian. After reviewing the general mathematical formalism underlying matched filters, we study the statistics of the signal-to-noise with a set of Monte Carlo mock observations, finding it to be well-described by a unit-variance Gaussian for signal-to-noise values of 6 and above, and quantify the magnitude of the optimisation bias, for which we give an approximate expression that may be used in practice. We also consider the impact of the bias on the cluster number counts of Planck and the Simons Observatory (SO), finding it to be negligible for the former and potentially significant for the latter.
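The optimisation bias described in this abstract can be reproduced in a few lines: even in pure Gaussian noise with no signal at all, maximising the filtered significance over candidate positions yields a positively biased peak value. The Monte Carlo below is a minimal sketch of that effect (a one-pixel template on white noise, not the paper's full matched-filter formalism).

```python
import numpy as np

rng = np.random.default_rng(0)

n_maps, n_pix = 2000, 256
# Pure white Gaussian noise, unit variance per pixel: for a one-pixel
# template each pixel value is already a unit-variance matched-filter SNR.
noise = rng.normal(size=(n_maps, n_pix))

# At a FIXED position the SNR is unbiased (mean consistent with zero) ...
fixed_mean = noise[:, 0].mean()

# ... but maximising over candidate positions -- the optimisation step of a
# blind matched-filter search -- gives a positively biased significance.
peak_mean = noise.max(axis=1).mean()

print(f"fixed-position mean SNR: {fixed_mean:+.3f}")
print(f"maximised mean SNR:      {peak_mean:+.3f}")
```

The same mechanism applies when maximising over filter scales or other template parameters, which is why the bias persists even with a perfect signal model and Gaussian noise.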
357 - Steen Hannestad 2016
I review the current status of structure formation bounds on neutrino properties such as mass and energy density. I also discuss future cosmological bounds as well as a variety of different scenarios for reconciling cosmology with the presence of light sterile neutrinos.
149 - Darren S. Reed 2012
Cosmological surveys aim to use the evolution of the abundance of galaxy clusters to accurately constrain the cosmological model. In the context of LCDM, we show that it is possible to achieve the required percent level accuracy in the halo mass function with gravity-only cosmological simulations, and we provide simulation start and run parameter guidelines for doing so. Some previous works have had sufficient statistical precision, but lacked robust verification of absolute accuracy. Convergence tests of the mass function with, for example, simulation start redshift can exhibit false convergence of the mass function due to counteracting errors, potentially misleading one to infer overly optimistic estimations of simulation accuracy. Percent level accuracy is possible if initial condition particle mapping uses second order Lagrangian Perturbation Theory, and if the start epoch is between 10 and 50 expansion factors before the epoch of halo formation of interest. The mass function for halos with fewer than ~1000 particles is highly sensitive to simulation parameters and start redshift, implying a practical minimum mass resolution limit due to mass discreteness. The narrow range in converged start redshift suggests that it is not presently possible for a single simulation to capture accurately the cluster mass function while also starting early enough to model accurately the numbers of reionisation era galaxies, whose baryon feedback processes may affect later cluster properties. Ultimately, to fully exploit current and future cosmological surveys will require accurate modeling of baryon physics and observable properties, a formidable challenge for which accurate gravity-only simulations are just an initial step.
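The "10 to 50 expansion factors before the epoch of interest" guideline above translates directly into a start-redshift window via a = 1/(1+z). The hypothetical helper below (not from the paper's code) makes that arithmetic explicit.

```python
def start_redshift_window(z_form: float) -> tuple[float, float]:
    """Return (latest, earliest) allowed start redshifts implied by
    starting 10-50 expansion factors before the formation epoch of
    interest, using a = 1 / (1 + z). Hypothetical illustration only."""
    z_late = (1.0 + z_form) * 10.0 - 1.0   # 10 expansion factors earlier
    z_early = (1.0 + z_form) * 50.0 - 1.0  # 50 expansion factors earlier
    return z_late, z_early

# Clusters forming around z ~ 0.5: start between z ~ 14 and z ~ 74.
print(start_redshift_window(0.5))
```

The wide gap between such windows for z ~ 0.5 clusters and for reionisation-era (z ~ 6-10) galaxies is exactly the tension the abstract notes: no single start redshift is converged for both.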