The current generation of all-sky surveys is rapidly expanding our ability to study variable and transient sources. These surveys, with a variety of sensitivities, cadences, and fields of view, probe many ranges of timescale and magnitude. Data from the Zwicky Transient Facility (ZTF) provide an opportunity to find variables on timescales from minutes to months. In this paper, we present the codebase, ztfperiodic, and the computational metrics employed for the catalogue based on ZTF's Second Data Release. We describe the publicly available, graphics-processing-unit (GPU) optimized period-finding algorithms employed, and highlight the benefit of existing and future GPU clusters. We show how generating metrics as input to catalogues of this scale is possible for future ZTF data releases. Further work will be needed for future data from the Vera C. Rubin Observatory's Legacy Survey of Space and Time.
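To make the per-source computation concrete, the minimal sketch below runs a single-light-curve period search with astropy's Lomb-Scargle implementation on synthetic data. It is only an illustration: the actual ztfperiodic codebase relies on GPU-optimized implementations of several period-finding algorithms, and the sampling, period, and frequency grid chosen here are assumptions made for the example.

# Minimal CPU sketch of a single-source period search (illustrative only;
# the light curve below is synthetic, not ZTF data).
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 180.0, 300))        # days of sparse, ZTF-like sampling
true_period = 0.35                               # days (assumed toy value)
mag = 15.0 + 0.2 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)
magerr = np.full_like(mag, 0.02)

frequency, power = LombScargle(t, mag, magerr).autopower(
    minimum_frequency=1.0 / 30.0,                # periods up to ~30 days
    maximum_frequency=48.0,                      # periods down to ~30 minutes
)
best_period = 1.0 / frequency[np.argmax(power)]
print(f"best period: {best_period:.4f} d, peak power: {power.max():.3f}")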
The Zwicky Transient Facility (ZTF) has been observing the entire northern sky since the start of 2018 down to a magnitude of 20.5 ($5\sigma$ for a 30 s exposure) in $g$, $r$, and $i$ filters. Over the course of two years, ZTF has obtained light curves of more than a billion sources, with 50-1000 epochs per light curve in $g$ and $r$, and fewer in $i$. To be able to use the information contained in the light curves of variable sources for new scientific discoveries, an efficient and flexible framework is needed to classify them. In this paper, we introduce the methods and infrastructure which will be used to classify all ZTF light curves. Our approach aims to be flexible and modular and allows the use of a dynamical classification scheme and labels, continuously evolving training sets, and different machine-learning classifier types and architectures. With this setup, we are able to continuously update and improve the classification of ZTF light curves as new data become available, training samples are updated, and new classes need to be incorporated.
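As a toy illustration of feature-based light-curve classification (not the pipeline described above), the sketch below trains a scikit-learn random forest on a few hypothetical per-source features; the feature names, labels, and two-class setup are assumptions made purely for the example.

# Toy sketch of feature-based light-curve classification with synthetic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
# Hypothetical per-source features: best period, amplitude, and a skew statistic.
X = np.column_stack([
    rng.uniform(0.1, 10.0, n),      # period [days]
    rng.uniform(0.01, 1.0, n),      # amplitude [mag]
    rng.normal(0.0, 1.0, n),        # light-curve skewness
])
y = (X[:, 1] > 0.3).astype(int)     # fake labels: "high-amplitude variable" vs. not

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")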
The ESA Gaia mission provides a unique time-domain survey for more than one billion sources brighter than G=20.7 mag. Gaia offers the unprecedented opportunity to study variability phenomena in the Universe thanks to multi-epoch G-magnitude photometry in addition to astrometry, blue and red spectro-photometry, and spectroscopy. Within the Gaia Consortium, Coordination Unit 7 has the responsibility to detect variable objects, classify them, derive characteristic parameters for specific variability classes, and provide global descriptions of variable phenomena. We describe the variability processing and analysis that we plan to apply to the successive data releases, and we present its application to the G-band photometry results of the first 14 months of Gaia operations, comprising 28 days of Ecliptic Pole Scanning Law and 13 months of Nominal Scanning Law. Out of the 694 million all-sky sources that have calibrated G-band photometry in this first stage of the mission, about 2.3 million sources that have at least 20 observations are located within 38 degrees of the South Ecliptic Pole. We detect about 14% of them as variable candidates, among which the automated classification identified 9347 Cepheid and RR Lyrae candidates. Additional visual inspections and selection criteria led to the publication of 3194 Cepheid and RR Lyrae stars, described in Clementini et al. (2016). Under the restrictive conditions for DR1, the completenesses of Cepheids and RR Lyrae stars are estimated at 67% and 58%, respectively, numbers that will significantly increase with subsequent Gaia data releases. Data processing within the Gaia Consortium is iterative, with the quality of the data and the results improving at each iteration. The results presented in this article show a glimpse of the exceptional harvest that is to be expected from the Gaia mission for variability phenomena. [abridged]
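For illustration, a generic variability-candidate test of the kind such a processing chain might start from is sketched below. It uses a simple reduced chi-square excess statistic and is not the Coordination Unit 7 detection criterion; only the 20-observation minimum mirrors the cut quoted above, and the threshold value is an assumption.

# Illustrative variability-candidate test on a single G-band light curve.
import numpy as np

def is_variable_candidate(mag, mag_err, min_obs=20, chi2_red_threshold=3.0):
    """Flag a source whose scatter exceeds its quoted photometric errors."""
    mag = np.asarray(mag, dtype=float)
    mag_err = np.asarray(mag_err, dtype=float)
    if mag.size < min_obs:
        return False
    weights = 1.0 / mag_err**2
    weighted_mean = np.sum(weights * mag) / np.sum(weights)
    chi2_red = np.sum(((mag - weighted_mean) / mag_err) ** 2) / (mag.size - 1)
    return chi2_red > chi2_red_threshold

rng = np.random.default_rng(2)
quiet = rng.normal(17.0, 0.01, 40)
print(is_variable_candidate(quiet, np.full(40, 0.01)))                      # expected: False
print(is_variable_candidate(quiet + 0.1 * np.sin(np.arange(40)),
                            np.full(40, 0.01)))                             # expected: True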
The GMRT Online Archive now houses over 120 terabytes of interferometric observations obtained with the GMRT since the observatory began operating as a facility in 2002. The utility of this vast data archive, likely the largest of any Indian telescope, can be significantly enhanced if first-look (and, where possible, science-ready) processed images can be made available to the user community. We have initiated a project to pipeline-process GMRT images in the 150, 240, 325 and 610 MHz bands. The thousands of processed continuum images that we will produce will prove useful in studies of distant galaxy clusters, radio AGN, as well as nearby galaxies and star-forming regions. Besides the scientific returns, a uniform data-processing pipeline run on a large volume of data can be used in other interesting ways. For example, we will be able to measure various performance characteristics of the GMRT telescope and their dependence on waveband, time of day, RFI environment, backend, galactic latitude, etc. in a systematic way. A variety of data products such as calibrated UVFITS data, sky images, and AIPS processing logs will be delivered to users via a web-based interface. Data products will be compatible with standard Virtual Observatory protocols.
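As an example of what Virtual Observatory compatibility enables on the user side, the sketch below queries a Simple Image Access (SIA) service with pyvo. The service URL and the search position are placeholders, not the actual GMRT archive endpoint or a real data set.

# Hedged sketch of retrieving image products through a standard VO SIA service.
import pyvo

SIA_URL = "https://example.org/gmrt-pipeline/sia"   # hypothetical endpoint

service = pyvo.dal.SIAService(SIA_URL)
# Search for processed images within 0.5 deg of a position of interest.
results = service.search(pos=(187.7, 12.4), size=0.5)
for record in results:
    print(record.title, record.getdataurl())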
The measurement of the diffuse $21$-cm radiation from the hyperfine transition of neutral hydrogen (HI signal) at different redshifts is an important tool for modern cosmology. However, detecting this faint signal with non-cryogenic receivers in single-dish telescopes is a challenging task. The BINGO (Baryon Acoustic Oscillations from Integrated Neutral Gas Observations) radio telescope is an instrument designed to detect baryonic acoustic oscillations (BAO) in the cosmological HI signal, in the redshift interval $0.127 \le z \le 0.449$. This paper describes the BINGO radio telescope, including the current status of the optics, receiver, observational strategy, calibration and the site. BINGO has been carefully designed to minimize systematics, being a transit instrument with no moving dishes and 28 horns operating in the frequency range $980 \le \nu \le 1260$ MHz. Comprehensive laboratory tests were conducted for many of the BINGO subsystems, and the prototypes of the receiver chain, horn, polarizer, magic tees and transitions have been successfully tested between 2018 and 2020. The survey was designed to cover $\sim 13\%$ of the sky, with the primary mirror pointing at declination $\delta=-15^{\circ}$. The telescope will see an instantaneous declination strip of $14.75^{\circ}$. The results of the prototype tests closely match those obtained during the modelling process, suggesting BINGO will perform according to our expectations. After one year of observations with a 60% duty cycle, BINGO should achieve an expected sensitivity of $102\,\mu\mathrm{K}$ for 28 horns and 30 redshift bins, considering one polarization, and be able to measure the HI power spectrum in a competitive time frame.
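To show how such a sensitivity forecast scales, the sketch below evaluates the standard radiometer equation. The system temperature, sky-pixel count, and per-bin bandwidth used here are illustrative placeholders rather than BINGO design values; the $102\,\mu\mathrm{K}$ quoted above comes from the full instrument model.

# Back-of-the-envelope radiometer-equation sketch of per-pixel noise scaling.
import numpy as np

def thermal_noise_mK(T_sys_K, bandwidth_Hz, t_int_s, n_horns=28, n_pol=1):
    """Radiometer equation: sigma = T_sys / sqrt(n_horns * n_pol * dnu * t)."""
    sigma_K = T_sys_K / np.sqrt(n_horns * n_pol * bandwidth_Hz * t_int_s)
    return sigma_K * 1e3

# One year at a 60% duty cycle spread over an assumed 10^4 sky pixels,
# with the 280 MHz band split into 30 redshift bins (~9.3 MHz each);
# T_sys = 70 K is an assumed, not a design, value.
t_per_pixel = 0.6 * 365.25 * 86400 / 1e4
print(f"{thermal_noise_mK(70.0, 280e6 / 30, t_per_pixel):.3f} mK per pixel per bin")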
The Taiwanese-American Occultation Survey (TAOS) project has collected more than a billion photometric measurements since 2005 January. These sky survey data, covering timescales from a fraction of a second to a few hundred days, are a useful source for studying stellar variability. A total of 167 star fields, mostly along the ecliptic plane, have been selected for photometric monitoring with the TAOS telescopes. This paper presents our initial analysis of a search for periodic variable stars from the time-series TAOS data on one particular TAOS field, No. 151 (RA = $17^{\rm h}30^{\rm m}6\fs67$, Dec = $27\degr17\arcmin30\arcsec$, J2000), which had been observed over 47 epochs in 2005. A total of 81 candidate variables are identified in the 3 square degree field, with magnitudes in the range $8 < R < 16$. On the basis of the periodicity and shape of the light curves, 29 variables, 15 of which were previously unknown, are classified as RR Lyrae, Cepheid, $\delta$ Scuti, SX Phoenicis, semi-regular variables and eclipsing binaries.
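As a minimal illustration of the kind of phase-folding check used when classifying candidates by period and light-curve shape, the sketch below folds a synthetic light curve at an assumed period; none of the numbers correspond to TAOS data.

# Minimal phase-folding sketch for inspecting a candidate periodic variable.
import numpy as np

def fold(t, mag, period, n_bins=20):
    """Return binned median magnitude versus phase at a trial period."""
    phase = (t / period) % 1.0
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.digitize(phase, bins) - 1
    binned = np.array([np.median(mag[idx == i]) if np.any(idx == i) else np.nan
                       for i in range(n_bins)])
    return 0.5 * (bins[:-1] + bins[1:]), binned

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 100.0, 500))            # days of synthetic sampling
period = 0.62                                        # days, an RR Lyrae-like toy period
mag = 12.0 + 0.4 * np.sin(2 * np.pi * t / period) + rng.normal(0, 0.05, t.size)

phase_centres, profile = fold(t, mag, period)
print(np.round(profile, 2))                          # folded light-curve shape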