
DES Science Portal: Computing Photometric Redshifts

Published by: Julia Gschwend
Publication date: 2017
Research field: Physics
Paper language: English


A significant challenge facing photometric surveys for cosmological purposes is the need to produce reliable redshift estimates. The estimation of photometric redshifts (photo-zs) has been consolidated as the standard strategy to bypass the high production costs and incompleteness of spectroscopic redshift samples. Training-based photo-z methods require the preparation of a high-quality list of spectroscopic redshifts, which needs to be constantly updated. The photo-z training, validation, and estimation must be performed in a consistent and reproducible way in order to meet the scientific requirements. To this end, we developed an integrated web-based data interface that not only provides the framework to carry out the above steps in a systematic way, enabling easy testing and comparison of different algorithms, but also addresses the processing requirements by parallelizing the calculation in a way that is transparent to the user. This framework, called the Science Portal (hereafter Portal), was developed in the context of the Dark Energy Survey (DES) to facilitate scientific analysis. In this paper, we show how the Portal can provide a reliable environment to access vast data sets and apply validation algorithms and metrics, even in the case of multiple photo-z methods. It is possible to maintain the provenance between the steps of a chain of workflows while ensuring reproducibility of the results. We illustrate how the Portal can be used to provide photo-z estimates using the DES first-year (Y1A1) data. While the DES collaboration is still developing techniques to obtain more precise photo-zs, having a structured framework like the one presented here is critical for the systematic vetting of DES algorithmic improvements and the consistent production of photo-zs in future DES releases.
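
As a concrete illustration of the training, validation, and estimation steps described above, here is a minimal sketch of a training-based photo-z workflow in Python. The random-forest regressor and the validation metrics shown (normalized bias, sigma_68, outlier fraction) are common choices in the photo-z literature, used here as assumptions for illustration; they are not the Portal's actual algorithms or interfaces.

```python
# Minimal sketch of a training-based photo-z workflow (train / validate /
# estimate). The method and metrics are illustrative assumptions, not the
# Portal's actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

def train_and_validate(mags, spec_z):
    """mags: (N, nbands) magnitudes; spec_z: spectroscopic redshifts."""
    X_train, X_val, z_train, z_val = train_test_split(
        mags, spec_z, test_size=0.3, random_state=0)
    model = RandomForestRegressor(n_estimators=200).fit(X_train, z_train)
    z_phot = model.predict(X_val)
    # Standard photo-z validation metrics on the held-out sample.
    dz = (z_phot - z_val) / (1.0 + z_val)
    metrics = {
        "bias": np.mean(dz),                       # mean normalized residual
        "sigma68": np.percentile(np.abs(dz), 68),  # 68th-percentile scatter
        "outlier_frac": np.mean(np.abs(dz) > 0.15),
    }
    return model, metrics

# Estimation step: apply the trained model to the photometric sample.
# model, metrics = train_and_validate(training_mags, training_specz)
# photo_z = model.predict(survey_mags)
```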


Read also

We present a novel approach for creating science-ready catalogs through a software infrastructure developed for the Dark Energy Survey (DES). We integrate the data products released by the DES Data Management and additional products created by the DES collaboration in an environment known as DES Science Portal. Each step involved in the creation of a science-ready catalog is recorded in a relational database and can be recovered at any time. We describe how the DES Science Portal automates the creation and characterization of lightweight catalogs for DES Year 1 Annual Release, and show its flexibility in creating multiple catalogs with different inputs and configurations. Finally, we discuss the advantages of this infrastructure for large surveys such as DES and the Large Synoptic Survey Telescope. The capability of creating science-ready catalogs efficiently and with full control of the inputs and configurations used is an important asset for supporting science analysis using data from large astronomical surveys.
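
The step-level provenance recording described above can be pictured with a small sketch. The snippet below logs each pipeline step, its configuration, and its inputs to a relational database so that a catalog run can be reconstructed later; the table layout, column names, and SQLite backend are illustrative assumptions, not the Portal's actual schema.

```python
# Hedged sketch of step-level provenance recording in a relational database.
# Table and column names are assumptions for illustration only.
import json
import sqlite3
from datetime import datetime, timezone

con = sqlite3.connect("provenance.db")
con.execute("""CREATE TABLE IF NOT EXISTS pipeline_step (
    run_id      TEXT,  -- identifier of the catalog run
    step_name   TEXT,  -- which stage of the workflow this row records
    config_json TEXT,  -- full configuration used for this step
    inputs_json TEXT,  -- upstream products this step consumed
    started_at  TEXT
)""")

def record_step(run_id, step_name, config, inputs):
    """Log one workflow step so the exact chain can be replayed later."""
    con.execute(
        "INSERT INTO pipeline_step VALUES (?, ?, ?, ?, ?)",
        (run_id, step_name, json.dumps(config), json.dumps(inputs),
         datetime.now(timezone.utc).isoformat()),
    )
    con.commit()

record_step("y1a1_cat_001", "zero_point_correction",
            {"method": "SLR"}, ["coadd_objects_v1"])
```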
We study the clustering of galaxies detected at $i<22.5$ in the Science Verification observations of the Dark Energy Survey (DES). Two-point correlation functions are measured using $2.3\times 10^6$ galaxies over a contiguous 116 deg$^2$ region in five bins of photometric redshift width $\Delta z = 0.2$ in the range $0.2 < z < 1.2$. The impact of photometric redshift errors is assessed by comparing results using a template-based photo-$z$ algorithm (BPZ) to a machine-learning algorithm (TPZ). A companion paper (Leistedt et al. 2015) presents maps of several observational variables (e.g. seeing, sky brightness) which could modulate the galaxy density. Here we characterize and mitigate systematic errors on the measured clustering which arise from these observational variables, in addition to others such as Galactic dust and stellar contamination. After correcting for systematic effects we measure galaxy bias over a broad range of linear scales relative to mass clustering predicted from the Planck $\Lambda$CDM model, finding agreement with CFHTLS measurements with $\chi^2$ of 4.0 (8.7) with 5 degrees of freedom for the TPZ (BPZ) redshifts. We test a linear bias model, in which the galaxy clustering is a fixed multiple of the predicted non-linear dark-matter clustering. The precision of the data allows us to determine that the linear bias model describes the observed galaxy clustering to $2.5\%$ accuracy down to scales at least $4$ to $10$ times smaller than those on which linear theory is expected to be sufficient.
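
The linear bias test mentioned above reduces to fitting a single multiplier between the measured galaxy clustering and the predicted dark-matter clustering. Below is a hedged sketch of such a fit; the toy inputs and the diagonal-covariance error treatment are simplifying assumptions (the published analysis uses full covariance matrices).

```python
# Sketch of a linear-bias fit: w_gal(theta) = b^2 * w_dm(theta).
# Toy inputs only; not DES measurements.
import numpy as np

def fit_linear_bias(w_obs, w_err, w_dm):
    """Weighted least-squares fit of b^2, assuming a diagonal covariance."""
    weight = 1.0 / w_err**2
    b2 = np.sum(w_obs * w_dm * weight) / np.sum(w_dm**2 * weight)
    chi2 = np.sum((w_obs - b2 * w_dm) ** 2 * weight)
    dof = len(w_obs) - 1  # one fitted parameter
    return np.sqrt(b2), chi2, dof

# Toy example with an input bias of b = 1.5:
rng = np.random.default_rng(1)
theta = np.logspace(-1, 0.5, 5)                 # angular scales (deg)
w_dm = 0.05 * theta**-0.8                       # predicted matter clustering
w_obs = 1.5**2 * w_dm * (1 + 0.03 * rng.standard_normal(5))
b, chi2, dof = fit_linear_bias(w_obs, 0.03 * w_obs + 1e-4, w_dm)
print(f"b = {b:.2f}, chi2/dof = {chi2/dof:.2f}")
```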
Upcoming imaging surveys, such as LSST, will provide an unprecedented view of the Universe, but with limited resolution along the line of sight. Common ways to increase resolution in the third dimension, and reduce misclassifications, include observing a wider wavelength range and/or combining the broad-band imaging with higher spectral resolution data. The challenge with these approaches is matching the depth of these ancillary data with the original imaging survey. However, while a full 3D map is required for some science, there are many situations where only the statistical distribution of objects (dN/dz) in the line-of-sight direction is needed. In such situations, there is no need to measure the fluxes of individual objects in all of the surveys. Rather, a stacking procedure can be used to perform an `ensemble photo-z'. We show how a shallow, higher spectral resolution survey can be used to measure dN/dz for stacks of galaxies which coincide in a deeper, lower resolution survey. The galaxies in the deeper survey do not even need to appear individually in the shallow survey. We give a toy model example to illustrate tradeoffs and considerations for applying this method. This approach will allow deep imaging surveys to leverage the high resolution of spectroscopic and narrow/medium band surveys underway, even when the latter do not have the same reach to high redshift.
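
A toy version of the stacking idea can be written in a few lines: simulate galaxies whose emission lines are individually buried in noise in the shallow high-resolution survey, then average the spectra at the positions supplied by the deep survey. All numbers below are invented for the demonstration and are not the paper's toy model.

```python
# Toy illustration of the `ensemble photo-z' stacking idea: individual
# galaxies are undetectable in the shallow survey, but the stacked spectrum
# at deep-survey positions recovers the sample's dN/dz.
import numpy as np

rng = np.random.default_rng(0)
wave = np.linspace(4000, 9000, 500)   # wavelength grid of the shallow survey (A)
line = 3727.0                         # rest-frame [OII] line, as an example

true_z = rng.normal(0.6, 0.05, size=2000)  # the dN/dz we hope to recover
spectra = rng.normal(0.0, 5.0, size=(2000, wave.size))  # noise >> signal
for i, z in enumerate(true_z):
    # Each galaxy contributes a weak line at its redshifted wavelength.
    spectra[i] += 0.2 * np.exp(-0.5 * ((wave - line * (1 + z)) / 15.0) ** 2)

stack = spectra.mean(axis=0)          # ensemble average at known positions
z_grid = wave / line - 1.0            # map observed wavelength back to redshift
# `stack' as a function of z_grid now traces dN/dz (up to normalization),
# even though no single galaxy is detected in the shallow survey.
```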
Las Cumbres Observatory (LCO) has deployed a network of ten identical 1-m telescopes to four locations. The global coverage and flexibility of the LCO network makes it ideal for discovery, follow-up, and characterization of all Solar System objects, and especially Near-Earth Objects (NEOs). We describe the LCO NEO Follow-up Network which makes use of the LCO network of robotic telescopes and an online, cloud-based web portal, NEOexchange, to perform photometric characterization and spectroscopic classification of NEOs and follow-up astrometry for both confirmed NEOs and unconfirmed NEO candidates. The follow-up astrometric, photometric, and spectroscopic characterization efforts are focused on those NEO targets that are due to be observed by the planetary radar facilities and those on the NHATS lists. Astrometry allows us to improve target orbits, making radar observations possible for objects with a short arc or large orbital uncertainty, and also allows for the detection and measurement of the Yarkovsky effect on NEOs. Photometric and spectroscopic data allow us to determine the light curve shape and amplitude, measure rotation periods, determine the taxonomic classification, and improve the overall characterization of these targets. We describe the NEOexchange follow-up portal and the methodology adopted which allows the software to be packaged and deployed anywhere, including in off-site cloud services. This allows professionals, amateurs, and citizen scientists to plan, schedule, and analyze NEO imaging and spectroscopy data using the LCO network, and acts as a coordination hub for the NEO follow-up efforts. We illustrate these capabilities with examples of first period determinations for radar-targeted NEOs and of the portal's use to plan and execute multi-site photometric and spectroscopic observations of (66391) 1999 KW4, the subject of the most recent planetary defense exercise campaign.
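
For the period determinations mentioned above, a rotation period search is typically done with a periodogram. The following sketch uses astropy's Lomb-Scargle implementation on a hypothetical photometry file; the file name, the frequency range, and the doubling of the best-fit period (for a double-peaked asteroid light curve) are illustrative assumptions, not NEOexchange's actual pipeline.

```python
# Hedged sketch of an asteroid light-curve period search with a
# Lomb-Scargle periodogram. Input file is hypothetical.
import numpy as np
from astropy.timeseries import LombScargle

# times (days), magnitudes, and uncertainties from follow-up photometry
t, mag, mag_err = np.loadtxt("neo_lightcurve.txt", unpack=True)

freq, power = LombScargle(t, mag, mag_err).autopower(
    minimum_frequency=1.0, maximum_frequency=100.0)  # periods of 0.24-24 h
best = freq[np.argmax(power)]

# Asteroid light curves are typically double-peaked (two maxima per
# rotation), so the rotation period is twice the best single period.
rotation_period_hours = 2.0 * 24.0 / best
print(f"candidate rotation period: {rotation_period_hours:.3f} h")
```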
Two of the main problems encountered in the development and accurate validation of photometric redshift (photo-z) techniques are the lack of spectroscopic coverage in feature space (e.g. colours and magnitudes) and the mismatch between photometric error distributions associated with the spectroscopic and photometric samples. Although these issues are well known, there is currently no standard benchmark allowing a quantitative analysis of their impact on the final photo-z estimation. In this work, we present two galaxy catalogues, Teddy and Happy, built to enable a more demanding and realistic test of photo-z methods. Using photometry from the Sloan Digital Sky Survey and spectroscopy from a collection of sources, we constructed datasets which mimic the biases between the underlying probability distributions of the real spectroscopic and photometric samples. We demonstrate the potential of these catalogues by submitting them to the scrutiny of different photo-z methods, including machine learning (ML) and template fitting approaches. Beyond the expected poor results from most ML algorithms for cases with missing coverage in feature space, we were able to recognize the superiority of global models in the same situation, and the general failure across all types of methods when incomplete coverage is combined with the presence of photometric errors - a data situation which photo-z methods have not been trained to deal with up to now and which must be addressed by future large-scale surveys. Our catalogues represent the first controlled environment allowing a straightforward implementation of such tests. The data are publicly available within the COINtoolbox (https://github.com/COINtoolbox/photoz_catalogues).
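
A test of the kind these catalogues enable can be sketched as follows: train a photo-z estimator on a spectroscopic-like sample with restricted feature-space coverage, then evaluate it on a photometric-like sample that extends beyond that coverage. The estimator and metrics below are illustrative assumptions, not the benchmark's prescribed procedure.

```python
# Sketch of a coverage-mismatch test in the spirit of Teddy/Happy:
# the same evaluation run on progressively mismatched samples quantifies
# the degradation caused by missing feature-space coverage.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def evaluate(train_X, train_z, test_X, test_z):
    """Train on a spectroscopic-like sample, score on a photometric-like one."""
    model = KNeighborsRegressor(n_neighbors=15).fit(train_X, train_z)
    dz = (model.predict(test_X) - test_z) / (1.0 + test_z)
    return {"bias": np.mean(dz), "sigma": np.std(dz)}

# Compare metrics across test samples with matched coverage versus samples
# that extend beyond the training set's colour-magnitude range; growing
# bias and scatter expose the coverage problem described above.
```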