The CoRoT space mission operated for almost 6 years, producing thousands of continuous photometric light curves. The temporal series of exposures is processed by the production pipeline, which corrects the data for known instrumental effects. But even after these model-based corrections, some collective trends are still visible in the light curves. We propose here a simple exposure-based algorithm to remove instrumental effects. The effect of each exposure is a function of only two instrumental stellar parameters, the position on the CCD and the photometric aperture. The effect is not a function of the stellar flux, and is therefore much more robust. As an example, we show that the $\sim 2\%$ long-term variation of the early run LRc01 is nicely detrended on average. This systematics-removal process is part of the CoRoT legacy data pipeline.
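To make the idea concrete, here is a minimal Python sketch of such an exposure-based correction. It is a hypothetical illustration, not the CoRoT pipeline code: light curves are grouped by the two instrumental parameters, and each group's collective trend is estimated per exposure and divided out.

```python
import numpy as np

def exposure_based_detrend(flux, ccd_zone, aperture):
    """Sketch of an exposure-based correction (illustrative, not the
    CoRoT pipeline).

    flux     : (n_stars, n_exposures) array of light curves
    ccd_zone : (n_stars,) integer label for each star's CCD region
    aperture : (n_stars,) integer label for each star's aperture
    """
    # Normalize by each star's median so the correction does not
    # depend on the stellar flux itself.
    norm = flux / np.nanmedian(flux, axis=1, keepdims=True)
    corrected = norm.copy()
    # For every (CCD zone, aperture) family, estimate the collective
    # trend at each exposure as the median over the family's members,
    # then divide it out of every member.
    for zone in np.unique(ccd_zone):
        for ap in np.unique(aperture):
            members = (ccd_zone == zone) & (aperture == ap)
            if members.sum() < 2:
                continue
            trend = np.nanmedian(norm[members], axis=0)  # one value per exposure
            corrected[members] /= trend
    return corrected
```

Because the trend is estimated per exposure from many stars sharing the same instrumental parameters, intrinsic variability of any single star barely affects it.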
We present a matched-filter-based algorithm for transit detection and its application to simulated COROT light curves. This algorithm stems from the work of Bordé, Rouan & Léger (2003). We describe the steps we intend to take to discriminate between planets and stellar companions using the three photometric bands provided by COROT: the search for secondary transits, the search for ellipsoidal variability, and the study of transit chromaticity. We also discuss the performance of this approach in the context of blind tests organized within the COROT exoplanet consortium.
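As an illustration of the underlying technique, the sketch below implements a basic matched-filter transit search with box-shaped templates. The template grid, normalization, and the function name matched_filter_search are assumptions for illustration, not the consortium pipeline.

```python
import numpy as np

def matched_filter_search(time, flux, periods, durations, n_phase=200):
    """Sketch of a matched-filter (box-template) transit search."""
    flux = flux - np.median(flux)           # zero-mean residual light curve
    best = (0.0, None)
    for period in periods:
        phase = (time % period) / period    # fold on the trial period
        for dur in durations:
            width = dur / period            # transit width in phase units
            for t0 in np.linspace(0.0, 1.0, n_phase, endpoint=False):
                in_transit = (phase - t0) % 1.0 < width
                n_in = in_transit.sum()
                if n_in == 0:
                    continue
                # Matched-filter statistic: correlation of the data with a
                # unit-depth box, normalized by the template "energy";
                # a transit dip makes the in-transit sum negative.
                stat = -flux[in_transit].sum() / np.sqrt(n_in)
                if stat > best[0]:
                    best = (stat, (period, dur, t0 * period))
    return best   # (detection statistic, (period, duration, epoch))
```

The brute-force triple loop is only for clarity; a production search would restrict the phase grid per period and vectorize the statistic.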
Surveys for exoplanetary transits are usually limited not by photon noise but rather by the amount of red noise in their data. In particular, although the CoRoT space-based survey data are being carefully scrutinized, significant new sources of systematic noise are still being discovered. Recently, a magnitude-dependent systematic effect was discovered in the CoRoT data by Mazeh & Guterman et al., and a phenomenological correction was proposed. Here we tie the observed effect to a particular type of effect, and in the process generalize the popular Sysrem algorithm to include external parameters in a simultaneous solution with the unknown effects. We show that a post-processing scheme based on this algorithm performs well and indeed allows for the detection of new transit-like signals that were not previously detected.
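For reference, plain Sysrem (Tamuz, Mazeh & Zucker 2005) alternates two weighted linear least-squares solutions; a minimal sketch follows. The generalized version described above would, in addition, hold some effect vectors fixed to known external parameters (e.g. stellar magnitude) and solve for the remaining unknowns simultaneously.

```python
import numpy as np

def sysrem(residuals, errors, n_iter=100):
    """Plain Sysrem sketch: find per-star coefficients c and
    per-exposure effects a minimizing
        sum_ij ((r_ij - c_i * a_j) / sigma_ij)^2
    by alternating the two closed-form least-squares solutions.

    residuals : (n_stars, n_exposures) flux residuals r_ij
    errors    : (n_stars, n_exposures) uncertainties sigma_ij
    """
    w = 1.0 / errors**2                     # least-squares weights
    n_stars, n_exp = residuals.shape
    c = np.ones(n_stars)
    a = np.ones(n_exp)
    for _ in range(n_iter):
        # Best c for fixed a, then best a for fixed c.
        c = (w * residuals * a).sum(axis=1) / (w * a**2).sum(axis=1)
        a = (w * residuals * c[:, None]).sum(axis=0) / (w * c[:, None]**2).sum(axis=0)
    return residuals - np.outer(c, a)       # residuals with one effect removed
```

In practice the procedure is repeated to remove several effects, each time on the residuals left by the previous one.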
In recent years, the classification of variable stars with Machine Learning has become a mainstream area of research. Recently, visualization of time series has been attracting more attention in data science as a tool to help scientists visually recognize significant patterns in complex dynamics. Within the Machine Learning literature, dictionary-based methods have been widely used to encode relevant parts of image data. These methods intrinsically assign a degree of importance to patches in pictures, according to their contribution to the image reconstruction. Inspired by dictionary-based techniques, we present an approach that naturally provides the visualization of salient parts of astronomical light curves, drawing an analogy between image patches and relevant pieces of time series. Our approach encodes the most meaningful patterns such that we can approximately reconstruct light curves using just the encoded information. We test our method on light curves from the OGLE-III and StarLight databases. Our results show that the proposed model delivers an automatic and intuitive visualization of relevant light curve parts, such as local peaks and drops in magnitude.
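The sketch below suggests how such an encoding might look using off-the-shelf dictionary learning (scikit-learn's DictionaryLearning). The patch length, sparsity level, and saliency score are illustrative assumptions, not the authors' model.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

def encode_light_curve(flux, patch_len=32, n_atoms=16):
    """Sketch of a dictionary-based encoding of a light curve.
    Overlapping patches play the role of image patches; the sparse
    codes indicate which patches matter most for reconstruction."""
    # Slice the (evenly sampled) light curve into overlapping patches.
    step = patch_len // 2
    patches = np.array([flux[i:i + patch_len]
                        for i in range(0, len(flux) - patch_len, step)])
    patches = patches - patches.mean(axis=1, keepdims=True)  # remove local offsets

    dico = DictionaryLearning(n_components=n_atoms,
                              transform_algorithm='lasso_lars',
                              transform_alpha=0.1, random_state=0)
    codes = dico.fit_transform(patches)         # sparse code per patch
    recon = codes @ dico.components_            # approximate reconstruction
    # Patches with large code magnitudes are the "salient" parts
    # (e.g. local peaks and drops) one would highlight in a plot.
    saliency = np.abs(codes).sum(axis=1)
    return recon, saliency
```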
Instrumental data are affected by systematic effects that dominate the errors and can be relevant when searching for small signals. This is the case for the K2 mission, a follow-up of the Kepler mission that, after the failure of two reaction wheels, lost its pointing stability, strongly increasing the systematics in the light curves and reducing its photometric precision. In this work, we have developed a general method to remove time-related systematics from a set of light curves, which has been applied to K2 data. The method uses Principal Component Analysis to retrieve the correlation between the light curves caused by the systematics and to remove its effect without requiring any information other than the data itself. We have applied the method to all the K2 campaigns available at the Mikulski Archive for Space Telescopes, and we have tested the effectiveness of the procedure and its ability to preserve the astrophysical signal on a few transits and eclipsing binaries. One product of this work is the identification of stable sources along the ecliptic plane that can be used as photometric calibrators for the upcoming Atmospheric Remote-sensing Exoplanet Large-survey mission.
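A minimal sketch of this kind of PCA-based detrending, assuming a matrix of median-normalized light curves; this is a generic illustration of the technique rather than the authors' exact pipeline.

```python
import numpy as np

def pca_detrend(flux, n_components=5):
    """Sketch of PCA-based removal of time-correlated systematics.

    flux : (n_stars, n_cadences) array of raw light curves.
    """
    # Median-normalize so every star enters with comparable weight.
    norm = flux / np.median(flux, axis=1, keepdims=True) - 1.0
    # SVD of the ensemble: the leading right-singular vectors are the
    # common temporal trends shared by many light curves.
    u, s, vt = np.linalg.svd(norm, full_matrices=False)
    trends = vt[:n_components]          # (n_components, n_cadences)
    # Project each light curve onto the trends and subtract the fit;
    # signals unique to one star (transits, eclipses) survive because
    # they do not correlate across the ensemble.
    coeffs = norm @ trends.T
    return norm - coeffs @ trends
```

The number of components retained is the key tuning knob: too few leaves systematics in, too many starts absorbing real astrophysical variability.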
We propose a new information-theoretic metric for finding periodicities in stellar light curves. Light curves are astronomical time series of brightness over time, and are characterized as being noisy and unevenly sampled. The proposed metric combines correntropy (generalized correlation) with a periodic kernel to measure similarity among samples separated by a given period. The new metric provides a periodogram, called the Correntropy Kernelized Periodogram (CKP), whose peaks are associated with the fundamental frequencies present in the data. The CKP does not require any resampling, slotting or folding scheme, as it is computed directly from the available samples. The CKP is the main part of a fully automated pipeline for periodic light curve discrimination to be used in astronomical survey databases. We show that the CKP method outperformed slotted correntropy, as well as conventional methods used in astronomy, for periodicity discrimination and period estimation tasks, using a set of light curves drawn from the MACHO survey. The proposed metric achieved 97.2% true positives with 0% false positives at the 99% confidence level for the periodicity discrimination task, and 88% hits with 11.6% multiples and 0.4% misses in the period estimation task.
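A simplified reading of the CKP in code: a Gaussian kernel on pairwise magnitude differences (the correntropy term) is weighted by a periodic kernel on pairwise time differences, so samples separated by near-multiples of the trial period dominate the sum. The kernel widths and normalization here are illustrative assumptions, not the published formulation.

```python
import numpy as np

def ckp(time, mag, trial_periods, sigma_m=0.5, sigma_t=1.0):
    """Sketch of a Correntropy Kernelized Periodogram.
    Works directly on unevenly sampled data: no resampling,
    slotting or folding is needed."""
    dm = mag[:, None] - mag[None, :]            # pairwise magnitude differences
    dt = time[:, None] - time[None, :]          # pairwise time differences
    g_mag = np.exp(-dm**2 / (2 * sigma_m**2))   # generalized-correlation term
    out = np.empty(len(trial_periods))
    for k, p in enumerate(trial_periods):
        # Periodic kernel: close to 1 when dt is near a multiple of p.
        k_per = np.exp(-2 * np.sin(np.pi * dt / p)**2 / sigma_t**2)
        out[k] = (g_mag * k_per).mean()
    return out   # peaks mark candidate fundamental periods
```

The pairwise formulation is quadratic in the number of samples, so a survey-scale pipeline would evaluate it on a pruned period grid or on subsampled light curves.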