
An Overview of the LSST Image Processing Pipelines

Published by: James Bosch
Publication date: 2018
Research field: Physics
Paper language: English





The Large Synoptic Survey Telescope (LSST) is an ambitious astronomical survey with a similarly ambitious Data Management component. Data Management for LSST includes processing on both nightly and yearly cadences to generate transient alerts, deep catalogs of the static sky, and forced photometry light-curves for billions of objects at hundreds of epochs, spanning at least a decade. The algorithms running in these pipelines are individually sophisticated and interact in subtle ways. This paper provides an overview of those pipelines, focusing more on those interactions than the details of any individual algorithm.
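As a concrete illustration of the forced-photometry step mentioned above, here is a minimal sketch (not LSST pipeline code, and all names are illustrative): with the source position fixed by the deep catalog, each epoch contributes only a linear flux fit against the local PSF.

```python
# A minimal sketch of forced PSF photometry at a fixed catalog position,
# assuming background-subtracted cutouts and a known, normalized PSF image.
import numpy as np

def forced_psf_flux(cutout, psf, var):
    """Weighted least-squares flux for a source at a fixed position.

    Solves min_F sum((cutout - F * psf)**2 / var); the closed-form
    solution is the usual matched-filter estimate.
    """
    w = psf / var
    flux = np.sum(w * cutout) / np.sum(w * psf)
    flux_err = np.sqrt(1.0 / np.sum(w * psf))
    return flux, flux_err

# Toy usage: one epoch containing a unit-normalized Gaussian PSF source.
rng = np.random.default_rng(0)
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
var = np.full_like(psf, 4.0)                      # per-pixel variance
cutout = 1000.0 * psf + rng.normal(0.0, 2.0, psf.shape)
print(forced_psf_flux(cutout, psf, var))          # ~ (1000, err)
```

Repeating this per-epoch fit over a decade of exposures is what produces the light-curves described above.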



Read also

The past few decades have seen the burgeoning of wide-field, high-cadence surveys, the most formidable of which will be the Legacy Survey of Space and Time (LSST) to be conducted by the Vera C. Rubin Observatory. So new is the field of systematic time-domain survey astronomy, however, that major scientific insights will continue to be obtained using smaller, more flexible systems than the LSST. One such example is the Gravitational-wave Optical Transient Observer (GOTO), whose primary science objective is the optical follow-up of gravitational-wave events. The amount and rate of data production by GOTO and other wide-area, high-cadence surveys present a significant challenge to data processing pipelines, which need to operate in near real-time to fully exploit the time domain. In this study, we adapt the Rubin Observatory LSST Science Pipelines to process GOTO data, thereby exploring the feasibility of using this off-the-shelf pipeline to process data from other wide-area, high-cadence surveys. In this paper, we describe how we use the LSST Science Pipelines to process raw GOTO frames to ultimately produce calibrated coadded images and photometric source catalogues. After comparing the measured astrometry and photometry to those of matched sources from PanSTARRS DR1, we find that measured source positions are typically accurate to sub-pixel levels, and that measured L-band photometry is accurate to $\sim 50$ mmag at $m_L \sim 16$ and $\sim 200$ mmag at $m_L \sim 18$. These values compare favourably to those obtained using GOTO's primary, in-house pipeline, GOTOPHOTO, in spite of both pipelines having undergone further development and improvement beyond the implementations used in this study. Finally, we release a generic obs package that others can build upon should they wish to use the LSST Science Pipelines to process data from other facilities.
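The astrometric and photometric validation described above boils down to a sky cross-match followed by residual statistics. Below is a hedged sketch using astropy; the column layout and the 1-arcsec match radius are assumptions, not the paper's actual settings.

```python
# Cross-match a measured catalogue against a reference (e.g. PanSTARRS DR1)
# and return positional residuals and magnitude offsets for matched sources.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def match_and_compare(meas_ra, meas_dec, meas_mag,
                      ref_ra, ref_dec, ref_mag, radius_arcsec=1.0):
    meas = SkyCoord(meas_ra * u.deg, meas_dec * u.deg)
    ref = SkyCoord(ref_ra * u.deg, ref_dec * u.deg)
    idx, sep, _ = meas.match_to_catalog_sky(ref)   # nearest neighbour
    ok = sep < radius_arcsec * u.arcsec            # accept close matches only
    dmag = meas_mag[ok] - ref_mag[idx[ok]]         # photometric offsets
    return sep[ok].arcsec, dmag
```

Summary statistics of the returned arrays (e.g. median separation in pixels, scatter of dmag per magnitude bin) give the sub-pixel and mmag-level figures quoted in the abstract.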
We have adapted the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) Science Pipelines to process data from the Gravitational-Wave Optical Transient Observer (GOTO) prototype. In this paper, we describe how we used the Rubin Observatory LSST Science Pipelines to conduct forced photometry measurements on nightly GOTO data. By comparing the photometry measurements of sources taken on multiple nights, we find that the precision of our photometry is typically better than 20~mmag for sources brighter than 16 mag. We also compare our photometry measurements against colour-corrected PanSTARRS photometry, and find that the two agree to within 10~mmag ($1\sigma$) for bright (i.e., $\sim 14^{\rm th}$~mag) sources to 200~mmag for faint (i.e., $\sim 18^{\rm th}$~mag) sources. Additionally, we compare our results to those obtained by GOTO's own in-house pipeline, {\sc gotophoto}, and obtain similar results. Based on repeatability measurements, we measure a $5\sigma$ L-band survey depth of between 19 and 20 magnitudes, depending on observing conditions. We assess, using repeated observations of non-varying standard SDSS stars, the accuracy of our uncertainties, which we find are typically overestimated by roughly a factor of two for bright sources (i.e., $<15^{\rm th}$~mag), but slightly underestimated (by roughly a factor of 1.25) for fainter sources ($>17^{\rm th}$~mag). Finally, we present lightcurves for a selection of variable sources, and compare them to those obtained with the Zwicky Transient Facility and GAIA. Despite the Rubin Observatory LSST Science Pipelines still undergoing active development, our results show that they are already delivering robust forced photometry measurements from GOTO data.
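The repeatability and uncertainty checks described above reduce to two statistics: the per-source scatter across nights, and the "pull" of each measurement against the per-source mean. A minimal sketch (the array layout is assumed, and this is not the paper's code):

```python
# Repeatability and uncertainty validation from repeated measurements.
import numpy as np

def repeatability(mags):
    """RMS magnitude scatter per source; mags has shape (n_sources, n_epochs)."""
    return np.nanstd(mags, axis=1)

def pulls(fluxes, flux_errs):
    """(f - <f>) / sigma per measurement, shapes (n_sources, n_epochs).

    For well-estimated uncertainties std(pull) ~ 1; std(pull) ~ 0.5 means
    errors are overestimated by a factor of two, std(pull) ~ 1.25 means
    they are underestimated by a factor of 1.25.
    """
    mean = np.nanmean(fluxes, axis=1, keepdims=True)
    return (fluxes - mean) / flux_errs
```

Binning std(pull) by source brightness is what reveals the bright/faint over- and underestimation factors quoted in the abstract.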
Writing high-performance image processing code is challenging and labor-intensive. The Halide programming language simplifies this task by decoupling high-level algorithms from schedules which optimize their implementation. However, even with this abstraction, it is still challenging for Halide programmers to understand complicated scheduling strategies and productively write valid, optimized schedules. To address this, we propose a programming support method called guided optimization. Guided optimization provides programmers a set of valid optimization options and interactive feedback about their current choices, which enables them to comprehend and efficiently optimize image processing code without the time-consuming trial-and-error process of traditional text editors. We implemented a proof-of-concept system, Roly-poly, which integrates guided optimization, program visualization, and schedule cost estimation to support the comprehension and development of efficient Halide image processing code. We conducted a user study with novice Halide programmers and confirmed that Roly-poly and its guided optimization were informative, increased productivity, and resulted in higher-performing schedules in less time.
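To make the algorithm/schedule separation concrete, here is a purely conceptual Python analogy (not Halide code and not the Roly-poly API): the "algorithm" defines what each output pixel is, while interchangeable "schedules" decide how that computation is executed.

```python
# Conceptual analogy only: Halide expresses the algorithm (what each pixel
# is) separately from the schedule (loop order, tiling, vectorization).
import numpy as np

def blur_algorithm(img, x, y):
    """The 'algorithm': a 3x1 horizontal box blur at pixel (x, y)."""
    return (img[y, x - 1] + img[y, x] + img[y, x + 1]) / 3.0

def schedule_naive(img):
    """One 'schedule': a straightforward row-major loop nest."""
    out = np.zeros_like(img)
    for y in range(img.shape[0]):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = blur_algorithm(img, x, y)
    return out

def schedule_vectorized(img):
    """Another 'schedule': identical results, whole rows at a time."""
    out = np.zeros_like(img)
    out[:, 1:-1] = (img[:, :-2] + img[:, 1:-1] + img[:, 2:]) / 3.0
    return out
```

Both schedules compute the same pixels; guided optimization, as described above, helps programmers pick among such execution strategies without trial-and-error.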
J.A. Tyson, 2008
Next-generation probes of dark matter and dark energy require high-precision reconstruction of faint galaxy shapes from hundreds of dithered exposures. Current practice is to stack the images. While valuable for many applications, this stack is a highly compressed version of the data. Future weak lensing studies will require analysis of the full dataset, using the stack and its associated catalog only as a starting point. We describe a Multi-Fit algorithm which simultaneously fits individual galaxy exposures to a common profile model convolved with each exposure's point spread function at that position in the image. This technique leads to an enhancement of the number of usable small galaxies at high redshift and, more significantly, a decrease in systematic shear error.
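In outline, Multi-Fit minimizes a single chi-square over all exposures, with the shared galaxy model convolved by each exposure's own PSF. A hedged sketch with a toy circular Gaussian profile follows; the real algorithm uses realistic galaxy models and per-exposure astrometric mappings.

```python
# Joint fit of one galaxy model to many exposures, each with its own PSF.
import numpy as np
from scipy.optimize import minimize
from scipy.signal import fftconvolve

def joint_chi2(params, exposures, psfs, variances, grid):
    """Sum of per-exposure chi-squares for a shared (flux, sigma) model."""
    flux, sigma = params
    yy, xx = grid
    model = flux * np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    chi2 = 0.0
    for img, psf, var in zip(exposures, psfs, variances):
        conv = fftconvolve(model, psf, mode="same")  # per-exposure PSF
        chi2 += np.sum((img - conv) ** 2 / var)
    return chi2

# Usage sketch, given lists of cutouts/PSFs/variance maps on a common grid:
# grid = np.mgrid[-15:16, -15:16]
# fit = minimize(joint_chi2, x0=[1.0, 2.0],
#                args=(exposures, psfs, variances, grid),
#                method="Nelder-Mead")
```

Because each exposure is compared to the model through its own PSF, no information is lost to stacking, which is the source of the shear-systematics improvement claimed above.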
The Herschel Space Observatory is the fourth cornerstone mission in the ESA science programme and performs photometry and spectroscopy in the 55-672 micron range. The development of the Herschel Data Processing System started in 2002 to support the data analysis for Instrument Level Tests. The Herschel Data Processing System was used for the pre-flight characterisation of the instruments and during various ground segment test campaigns. Following the successful launch of Herschel on 14 May 2009, the Herschel Data Processing System demonstrated its maturity when the first PACS preview observation of M51 was processed within 30 minutes of reception of the first science data after launch. The first HIFI observations of DR21 were also successfully reduced to high-quality spectra, followed by SPIRE observations of M66 and M74. A fast turn-around cycle between data retrieval and the production of science-ready products was demonstrated during the Herschel Science Demonstration Phase Initial Results Workshop held 7 months after launch, clear proof that the system had reached a good level of maturity. We summarise the scope, management, and development methodology of the Herschel Data Processing System, present some key software elements, and give an overview of the current status and future development milestones.