The VST Optical Imaging of the CDFS and ES1 Fields (VOICE) Survey, in synergy with the SUDARE survey, is a deep optical $ugri$ imaging survey of the CDFS and ES1 fields using the VLT Survey Telescope (VST). The observations of the CDFS field cover about 4.38 deg$^2$ down to $r\sim26$ mag. The on-sky time in this field spans more than four years, distributed over four adjacent sub-fields. In this paper, we use the multi-epoch $r$-band imaging data to measure the variability of the detected objects and to search for transients. We perform careful astrometric and photometric calibrations and point spread function (PSF) modeling. A new method, referred to as differential running-average photometry, is proposed to measure the light curves of the detected objects. With this method, the differences between the PSFs of different epochs are reduced and background fluctuations are suppressed. Detailed uncertainty analysis and detrending corrections are applied to the light curves. We visually inspect the light curves to select variable objects and present some objects with interesting light curves. Further investigation of these objects in combination with multi-band data will be presented in a forthcoming paper.
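To illustrate the general idea of a running-average differential measurement (this is a minimal sketch of one plausible reading of the abstract, not the authors' pipeline), the snippet below assumes a list of aligned image cutouts, one per epoch, and an aperture mask; the function names, the window size, and the aperture interface are hypothetical.

```python
import numpy as np

def running_average_reference(frames, index, half_window=2):
    """Average the aligned frames within +/- half_window epochs of `index`,
    excluding the epoch itself, to form a local reference image."""
    lo, hi = max(0, index - half_window), min(len(frames), index + half_window + 1)
    neighbours = [frames[k] for k in range(lo, hi) if k != index]
    return np.mean(neighbours, axis=0)

def differential_light_curve(frames, aperture_mask, half_window=2):
    """Aperture flux of each epoch measured on the difference image
    (epoch minus its running-average reference); PSF and background
    mismatches between nearby epochs partly cancel in the difference."""
    fluxes = []
    for i, frame in enumerate(frames):
        ref = running_average_reference(frames, i, half_window)
        diff = frame - ref
        fluxes.append((diff * aperture_mask).sum())
    return np.array(fluxes)
```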
Optical variability has proven to be an effective way of detecting AGNs in imaging surveys with baselines lasting from weeks to years. In the present work we test its use as a tool to identify AGNs in the VST multi-epoch survey of the COSMOS field, originally tailored to detect supernova events. We make use of the multi-wavelength data provided by other COSMOS surveys to discuss the reliability of the method and the nature of our AGN candidates. Our selection returns a sample of 83 AGN candidates; based on a number of diagnostics, we conclude that 67 of them are confirmed AGNs (81% purity), 12 are classified as supernovae, while the nature of the remaining 4 is unknown. For the subsample of AGNs with a spectroscopic classification, we find that Type 1 AGNs are prevalent (89%) compared to Type 2 AGNs (11%). Overall, our approach is able to retrieve on average 15% of all AGNs in the field identified by means of spectroscopic or X-ray classification, with a strong dependence on the source apparent magnitude. In particular, the completeness for Type 1 AGNs is 25%, while it drops to 6% for Type 2 AGNs. The rest of the X-ray-selected AGN population presents on average a larger r.m.s. variability than the bulk of non-variable sources, indicating that, for at least some of these objects, variability detection is prevented only by the photometric accuracy of the data. We show how a longer observing baseline would return a larger sample of AGN candidates. Our results allow us to assess the usefulness of this AGN selection technique in view of future wide-field surveys.
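As a generic illustration of an r.m.s.-variability cut of the kind described above (a sketch under simple assumptions, not the selection actually applied to the COSMOS data), the snippet below flags sources whose excess scatter above the photometric errors exceeds a threshold; the data structures and the threshold value are hypothetical.

```python
import numpy as np

def rms_variability(mags, mag_errs):
    """Excess r.m.s. of a light curve above the mean photometric error."""
    excess = np.var(mags, ddof=1) - np.mean(mag_errs ** 2)
    return np.sqrt(max(excess, 0.0))

def select_variable_candidates(light_curves, threshold=0.05):
    """Flag sources whose excess r.m.s. exceeds `threshold` magnitudes.

    `light_curves` maps a source id to a (mags, mag_errs) pair of arrays."""
    return [sid for sid, (m, e) in light_curves.items()
            if rms_variability(np.asarray(m), np.asarray(e)) > threshold]
```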
We use the Richardson-Lucy deconvolution algorithm to extract one-dimensional (1D) spectra from LAMOST spectral images. Compared with other deconvolution algorithms, this algorithm is much faster. Application to a real LAMOST image shows that the resulting 1D spectrum has a higher SNR and resolution than those extracted by the LAMOST pipeline. Furthermore, our algorithm effectively suppresses the ringing artifacts that often appear in the 1D spectra produced by other deconvolution methods.
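For readers unfamiliar with the algorithm, a plain 1D Richardson-Lucy iteration is sketched below; it is a generic textbook implementation, not the LAMOST-specific extraction code, and it assumes the PSF along the dispersion direction is known.

```python
import numpy as np

def richardson_lucy_1d(observed, psf, n_iter=30):
    """Estimate the underlying spectrum from an observed (blurred) 1D profile
    by iterating the standard Richardson-Lucy multiplicative update."""
    psf = psf / psf.sum()
    psf_flipped = psf[::-1]
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.clip(blurred, 1e-12, None)   # avoid division by zero
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```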
Identification of anomalous light curves within time-domain surveys is often challenging. In addition, with the growing number of wide-field surveys and the volume of data produced exceeding astronomers' ability for manual evaluation, outlier and anomaly detection is becoming vital for transient science. We present an unsupervised method for transient discovery using a clustering technique and the Astronomaly package. As proof of concept, we evaluate 85,553 minute-cadenced light curves collected over two 1.5-hour periods as part of the Deeper, Wider, Faster program, using two different telescope dithering strategies. By combining the clustering technique HDBSCAN with the isolation forest anomaly detection algorithm via the visual interface of Astronomaly, we are able to rapidly isolate anomalous sources for further analysis. We successfully recover the known variable sources across a range of catalogues within the fields, and find a further 7 uncatalogued variables and two stellar flare events, including a rarely observed ultra-fast (5-minute) flare from a likely M dwarf.
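As a rough illustration of how the two algorithms can be combined, independently of the Astronomaly interface itself, the sketch below assumes a pre-computed feature matrix for the light curves; the use of the hdbscan and scikit-learn packages and all parameter values are illustrative choices, not the configuration used in the paper.

```python
import numpy as np
import hdbscan
from sklearn.ensemble import IsolationForest

def rank_anomalies(features):
    """Cluster light-curve features with HDBSCAN, then score every source with
    an isolation forest so the most isolated objects rank first for inspection."""
    clusterer = hdbscan.HDBSCAN(min_cluster_size=25)
    labels = clusterer.fit_predict(features)        # -1 marks HDBSCAN outliers

    forest = IsolationForest(n_estimators=200, random_state=0)
    forest.fit(features)
    scores = -forest.score_samples(features)        # higher = more anomalous

    order = np.argsort(scores)[::-1]                # inspect top-ranked sources first
    return order, labels, scores
```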
In this paper we discuss a simple method of testing for the presence of energy-dependent dispersion in high-energy datasets. It uses the minimisation of the Kolmogorov distance between the cumulative distributions of two probability functions as the statistical metric to estimate the magnitude of any spectral dispersion within transient features in a light curve. We also show that the method performs well in the presence of the modest energy resolutions (~20%) typical of gamma-ray observations. After presenting the method in detail, we apply it to a parameterised simulated light curve based on the extreme VHE gamma-ray flare of PKS 2155-304 observed with H.E.S.S. in 2006, in order to illustrate its potential through the concrete example of setting constraints on quantum-gravity-induced Lorentz invariance violation (LIV) effects. We obtain limits comparable to those of the most advanced techniques used in LIV searches applied to similar datasets, but the present method has the advantage of being particularly straightforward to use. Whilst the development of the method was motivated by LIV searches, it is also applicable to other astrophysical situations where energy-dependent dispersion is expected, such as spectral lags from the acceleration and cooling of particles in relativistic outflows.
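A minimal sketch of the idea is given below, assuming a simple linear-in-energy dispersion $t \to t - \tau E$ and a single split of the photon list into a low- and a high-energy band; the function names, the search bounds, and the units of $\tau$ are hypothetical and only meant to show the Kolmogorov-distance minimisation step.

```python
import numpy as np
from scipy.stats import ks_2samp
from scipy.optimize import minimize_scalar

def kolmogorov_distance(times, energies, tau, e_split):
    """KS distance between the arrival-time distributions of the low- and
    high-energy photons after removing a trial dispersion t -> t - tau*E."""
    corrected = times - tau * energies
    low = corrected[energies < e_split]
    high = corrected[energies >= e_split]
    return ks_2samp(low, high).statistic

def best_fit_dispersion(times, energies, e_split, tau_bounds=(-1.0, 1.0)):
    """Dispersion coefficient (e.g. s/TeV) that minimises the KS distance:
    the value at which the two bands' light curves best overlap."""
    result = minimize_scalar(
        lambda tau: kolmogorov_distance(times, energies, tau, e_split),
        bounds=tau_bounds, method="bounded")
    return result.x
```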
The study of the atmospheres of transiting exoplanets requires a photometric precision, and repeatability, of one part in $\sim 10^4$. This is beyond the original calibration plans of current observatories, hence the necessity to disentangle the instrumental systematics from the astrophysical signals in raw datasets. Most methods used in the literature are based on an approximate instrument model. The choice of the model parameters and their functional forms can sometimes be subjective, causing controversies in the literature. Recently, Morello et al. (2014, 2015) developed a non-parametric detrending method that gave coherent and repeatable results when applied to Spitzer/IRAC datasets that were debated in the literature. This method is based on Independent Component Analysis (ICA) of individual pixel time series, hereafter pixel-ICA. The main purpose of this paper is to investigate the limits and advantages of pixel-ICA on a series of simulated datasets with different instrument properties, including a range of jitter timescales and shapes, non-stationarity, sudden change points, etc. The performance of pixel-ICA is compared against that of other methods, in particular the polynomial centroid division (PCD) and pixel-level decorrelation (PLD, Deming et al. 2014) methods. We find that in the simulated cases pixel-ICA performs as well as or better than the other methods, and it also guarantees a higher degree of objectivity, because of its purely statistical foundation with no prior information on the instrument systematics. The results of this paper, together with previous analyses of Spitzer/IRAC datasets, suggest that photometric precision and repeatability of one part in $10^4$ can be achieved with current infrared space instruments.
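As a generic illustration of the pixel-ICA idea (using scikit-learn's FastICA rather than the specific ICA implementation of Morello et al.), the sketch below decomposes a matrix of individual pixel time series into statistically independent components; the number of components and all other settings are assumptions for the example only.

```python
import numpy as np
from sklearn.decomposition import FastICA

def pixel_ica_components(pixel_timeseries, n_components=5):
    """Decompose individual pixel light curves into independent source signals.

    `pixel_timeseries` has shape (n_frames, n_pixels): each column is one
    pixel's time series over the observation.  One recovered component is
    expected to carry the astrophysical (e.g. transit) signal, while the
    others trace instrument systematics such as pointing jitter."""
    ica = FastICA(n_components=n_components, random_state=0, max_iter=1000)
    sources = ica.fit_transform(pixel_timeseries)   # (n_frames, n_components)
    mixing = ica.mixing_                            # (n_pixels, n_components)
    return sources, mixing
```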