We describe the difference imaging pipeline (DiffImg) used to detect transients in deep images from the Dark Energy Survey Supernova program (DES-SN) in its first observing season, from Aug 2013 through Feb 2014. DES-SN is a search for transients in which ten 3-deg^2 fields are repeatedly observed in the g,r,i,z passbands with a cadence of about 1 week. The observing strategy has been optimized to measure high-quality light curves and redshifts for thousands of Type Ia supernovae (SNe Ia) with the goal of measuring dark energy parameters. The essential DiffImg functions are to align each search image to a deep reference image, perform a pixel-by-pixel subtraction, and then examine the subtracted image for significant positive detections of point-source objects. The vast majority of detections are subtraction artifacts, but after selection requirements and image filtering with an automated scanning program, there are about 130 detections per deg^2 per observation in each band, of which only about 25% are artifacts. Of the 7500 transients discovered by DES-SN in its first observing season, each requiring a detection on at least 2 separate nights, Monte Carlo simulations predict that 27% are expected to be supernovae. Another 30% of the transients are artifacts, and most of the remaining transients are AGN and variable stars. Fake SNe Ia are overlaid onto the images to rigorously evaluate detection efficiencies and to understand the DiffImg performance. The DiffImg efficiency measured with fake SNe agrees well with expectations from a Monte Carlo simulation that uses analytical calculations of the fluxes and their uncertainties. In our 8 shallow fields, with a single-epoch 50% completeness depth of 23.5 mag, the SN Ia efficiency falls to 1/2 at redshift z ≈ 0.7; in our 2 deep fields, with a depth of 24.5 mag, the efficiency falls to 1/2 at z ≈ 1.1.
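The align–subtract–detect sequence described above can be sketched in a few lines. This is an illustrative toy, not the DiffImg code: it assumes the search and reference images are already astrometrically aligned and PSF-matched (the hard part of a real pipeline), and the `detect_transients` helper and its 5-sigma threshold are hypothetical.

```python
import numpy as np

def detect_transients(search, reference, nsigma=5.0):
    """Toy version of the detection step: subtract an aligned
    reference image from a search image and flag pixels that are
    significantly positive.

    Both inputs are assumed to be aligned, PSF-matched 2-D arrays.
    """
    diff = search - reference
    # Robust per-pixel noise estimate from the median absolute deviation.
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))
    ys, xs = np.where(diff > nsigma * sigma)
    return diff, list(zip(ys.tolist(), xs.tolist()))

# Example: a flat reference plus one bright new point source.
rng = np.random.default_rng(42)
ref = 100 + rng.normal(0, 1, (64, 64))
sci = ref + rng.normal(0, 1, (64, 64))
sci[30, 40] += 50.0  # injected fake transient, as DES-SN does with fake SNe Ia
diff, detections = detect_transients(sci, ref)
```

Injecting fakes this way, and checking what fraction are recovered, is the same logic DES-SN uses at scale to measure its detection efficiency.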
The Dark Energy Survey (DES) is a five-year optical imaging campaign with the goal of understanding the origin of cosmic acceleration. DES performs a 5000 square degree survey of the southern sky in five optical bands (g,r,i,z,Y) to a depth of ~24th magnitude. Contemporaneously, DES performs a deep, time-domain survey in four optical bands (g,r,i,z) over 27 square degrees. DES exposures are processed nightly with an evolving data reduction pipeline and evaluated for image quality to determine if they need to be retaken. Difference imaging and transient source detection are also performed nightly in the time-domain component. On a biannual basis, DES exposures are reprocessed with a refined pipeline and coadded to maximize imaging depth. Here we describe the DES image processing pipeline in support of DES science, as a reference for users of archival DES data, and as a guide for future astronomical surveys.
We describe an algorithm for identifying point-source transients and moving objects on reference-subtracted optical images containing artifacts of processing and instrumentation. The algorithm makes use of the supervised machine learning technique known as Random Forest. We present results from its use in the Dark Energy Survey Supernova program (DES-SN), where it was trained using a sample of 898,963 signal and background events generated by the transient detection pipeline. After reprocessing the data collected during the first DES-SN observing season (Sep. 2013 through Feb. 2014) using the algorithm, the number of transient candidates eligible for human scanning decreased by a factor of 13.4, while only 1 percent of the artificial Type Ia supernovae (SNe Ia) injected into search images to monitor survey efficiency were lost, most of which were very faint events. Here we characterize the algorithm's performance in detail, and we discuss how it can inform pipeline design decisions for future time-domain imaging surveys, such as the Large Synoptic Survey Telescope and the Zwicky Transient Facility. An implementation of the algorithm and the training data used in this paper are available at http://portal.nersc.gov/project/dessn/autoscan.
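The core idea, a Random Forest trained on labeled signal and background detections, can be sketched as follows. This is not the autoscan code or its feature set: the five-dimensional synthetic features and the class separation below are stand-ins for the features a real pipeline would compute from each detection.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for autoscan-style training data: each row is a
# feature vector computed from a detection; labels are 1 = real signal,
# 0 = artifact/background.
rng = np.random.default_rng(0)
n = 2000
real = rng.normal(loc=1.0, scale=0.5, size=(n, 5))
bogus = rng.normal(loc=-1.0, scale=0.5, size=(n, 5))
X = np.vstack([real, bogus])
y = np.array([1] * n + [0] * n)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)

# Rank candidates by classifier score so that only the most promising
# detections reach human scanners.
probs = clf.predict_proba(X_test)[:, 1]
```

Thresholding `probs` is what yields the reduction in human-scanning load: only candidates above the cut are forwarded, at the cost of losing the small fraction of real events that score below it.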
The Transient Optical Sky Survey (TOSS) is an automated, ground-based telescope system dedicated to searching for optical transient events. Small telescope tubes are mounted on a tracking, semi-equatorial frame with a single polar axis. Each fixed-declination telescope records successive exposures which overlap in right ascension. Nightly observations produce time-series images of fixed fields within each declination band. We describe the TOSS data pipeline, including automated routines used for image calibration, object detection and identification, astrometry, and differential photometry. Time series of nightly observations are accumulated in a database for each declination band. Despite the modest cost of the mechanical system, results from the 2009-2010 observing campaign confirm the system's capability to produce light curves of satisfactory accuracy. Transients can be extracted from the individual time series by identifying deviations from baseline variability.
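The final extraction step, flagging epochs that deviate from baseline variability, can be sketched as below. This is a minimal illustration, not the TOSS pipeline: the `flag_transients` helper, its 5-sigma cut, and the robust MAD-based scatter estimate are all assumptions for the sketch.

```python
import numpy as np

def flag_transients(flux, nsigma=5.0):
    """Flag epochs in a light curve that deviate from baseline
    variability, in the spirit of the TOSS extraction step.

    Baseline and scatter are estimated robustly with the median and
    the median absolute deviation (MAD), so a single bright outburst
    does not inflate its own detection threshold.
    """
    flux = np.asarray(flux, dtype=float)
    baseline = np.median(flux)
    sigma = 1.4826 * np.median(np.abs(flux - baseline))  # MAD -> Gaussian sigma
    return np.where(np.abs(flux - baseline) > nsigma * sigma)[0]

# A quiet star observed for 30 nights, with one brightening event.
rng = np.random.default_rng(1)
lc = 1000.0 + rng.normal(0, 2, 30)
lc[12] += 200.0  # simulated outburst at epoch 12
flagged = flag_transients(lc)
```

Using the median and MAD rather than the mean and standard deviation is what keeps the baseline estimate stable when the transient itself is present in the time series.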
The Australian Square Kilometre Array Pathfinder (ASKAP) collects images of the sky at radio wavelengths with an unprecedented field of view, combined with a high angular resolution and sub-millijansky sensitivities. The large quantity of data produced is used by the ASKAP Variables and Slow Transients (VAST) survey science project to study the dynamic radio sky. Efficient pipelines are vital in such research, where searches often form a 'needle in a haystack' type of problem to solve. However, the existing pipelines developed among the radio-transient community are not suitable for the scale of ASKAP datasets. In this paper we provide a technical overview of the new VAST Pipeline: a modern and scalable Python-based data pipeline for transient searches, using up-to-date dependencies and methods. The pipeline allows source association to be performed at scale using the Pandas DataFrame interface and the well-known Astropy crossmatch functions. The Dask Python framework is used to parallelise operations as well as scale them both vertically and horizontally, by means of a cluster of workers. A modern web interface for data exploration and querying has also been developed using the latest Django web framework combined with Bootstrap.
Efficient identification and follow-up of astronomical transients is hindered by the need for humans to manually select promising candidates from data streams that contain many false positives. These artefacts arise in the difference images that are produced by most major ground-based time domain surveys with large format CCD cameras. This dependence on humans to reject bogus detections is unsustainable for next generation all-sky surveys, and significant effort is now being invested to solve the problem computationally. In this paper we explore a simple machine learning approach to real-bogus classification by constructing a training set from the image data of ~32000 real astrophysical transients and bogus detections from the Pan-STARRS1 Medium Deep Survey. We derive our feature representation from the pixel intensity values of a 20x20 pixel stamp around the centre of the candidates. This differs from previous work in that it operates directly on the pixels rather than relying on catalogued domain knowledge for feature design or selection. Three machine learning algorithms are trained (artificial neural networks, support vector machines and random forests) and their performances are tested on a held-out subset of 25% of the training data. We find the best results from the random forest classifier and demonstrate that by accepting a false positive rate of 1%, the classifier initially suggests a missed detection rate of around 10%. However, we also find that a combination of bright star variability, nuclear transients and uncertainty in human labelling means that our best estimate of the missed detection rate is approximately 6%.
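The distinguishing choice here, feeding raw 20x20 pixel stamps to the classifier instead of engineered features, and then reading off the missed detection rate at a fixed 1% false positive rate, can be sketched as below. The Gaussian-blob stamps are synthetic stand-ins, not Pan-STARRS1 data, and the exact thresholding procedure is an assumption for the sketch.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

def make_stamp(real):
    """Synthetic 20x20 stamp: 'real' detections get a central point
    source, 'bogus' detections are pure noise (a hypothetical stand-in
    for actual difference-image cutouts)."""
    yy, xx = np.mgrid[:20, :20]
    stamp = rng.normal(0, 1, (20, 20))
    if real:
        stamp += 8 * np.exp(-((yy - 10) ** 2 + (xx - 10) ** 2) / 8.0)
    return stamp.ravel()  # 400 raw pixel intensities as the feature vector

X = np.array([make_stamp(i < 500) for i in range(1000)])
y = np.array([1] * 500 + [0] * 500)

# Hold out 25% of the data for testing, as in the paper.
perm = rng.permutation(len(X))
train, test = perm[: int(0.75 * len(X))], perm[int(0.75 * len(X)):]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X[train], y[train])
scores = clf.predict_proba(X[test])[:, 1]

# Choose the score threshold giving a 1% false positive rate on bogus
# detections, then read off the missed detection rate for real ones.
bogus_scores = np.sort(scores[y[test] == 0])
thresh = bogus_scores[int(0.99 * len(bogus_scores))]
mdr = float(np.mean(scores[y[test] == 1] < thresh))
```

On these trivially separable synthetic stamps the missed detection rate is near zero; on real survey data the residual misses come from harder cases such as bright-star variability and nuclear transients, as the abstract notes.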