Deep learning techniques have been widely explored in the transiting-exoplanet field; however, previous work has mainly focused on classification and inspection. In this work, we develop a novel detection algorithm based on a well-proven object detection framework from the computer vision field. By training the network on the light curves of confirmed Kepler exoplanets, our model yields 94% precision and 95% recall for transits with signal-to-noise ratio higher than 6 (with the confidence threshold set to 0.6). At a slightly lower confidence threshold, recall can exceed 97%, which makes our model applicable to large-scale searches. We also transfer the trained model to TESS data and obtain similar performance. The results of our algorithm match human visual intuition and make it easy to find single-transit candidates. Moreover, the parameters of the output bounding boxes can also help to find multiplanet systems. Our network and detection functions are implemented in the Deep-Transit toolkit, an open-source Python package hosted on GitHub and PyPI.
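The precision/recall trade-off driven by the confidence threshold can be illustrated with a minimal sketch. This is not the Deep-Transit API; the function and field names below are hypothetical stand-ins for a detector that emits scored candidates:

```python
# Illustrative sketch only: filter candidate transit detections by a
# confidence threshold and measure precision/recall against known transits.
# The detection dicts and IDs are invented for demonstration.

def filter_detections(detections, threshold=0.6):
    """Keep detections whose confidence meets or exceeds the threshold."""
    return [d for d in detections if d["confidence"] >= threshold]

def precision_recall(predicted, true_positive_ids):
    """Precision and recall given predicted detections and true transit IDs."""
    predicted_ids = {d["id"] for d in predicted}
    tp = len(predicted_ids & true_positive_ids)
    precision = tp / len(predicted_ids) if predicted_ids else 0.0
    recall = tp / len(true_positive_ids) if true_positive_ids else 0.0
    return precision, recall

detections = [
    {"id": 1, "confidence": 0.95},  # true transit, high confidence
    {"id": 2, "confidence": 0.40},  # true transit, low confidence
    {"id": 3, "confidence": 0.70},  # false alarm
]
truth = {1, 2}

# Lowering the threshold trades precision for recall, as in the abstract.
for thr in (0.6, 0.3):
    p, r = precision_recall(filter_detections(detections, thr), truth)
    print(thr, round(p, 2), round(r, 2))
```

A lower threshold admits the weak true transit (raising recall) at the cost of also admitting the false alarm (lowering precision), which is why a large-scale search might prefer the more permissive setting and vet afterwards.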
We describe a new metric that uses machine learning to determine if a periodic signal found in a photometric time series appears to be shaped like the signature of a transiting exoplanet. This metric uses dimensionality reduction and k-nearest neighbors to determine whether a given signal is sufficiently similar to known transits in the same data set. This metric is being used by the Kepler Robovetter to determine which signals should be part of the Q1-Q17 DR24 catalog of planetary candidates. The Kepler Mission reports roughly 20,000 potential transiting signals with each run of its pipeline, yet only a few thousand appear sufficiently transit shaped to be part of the catalog. The other signals tend to be variable stars and instrumental noise. With this metric we are able to remove more than 90% of the non-transiting signals while retaining more than 99% of the known planet candidates. When tested with injected transits, less than 1% are lost. This metric will enable the Kepler mission and future missions looking for transiting planets to rapidly and consistently find the best planetary candidates for follow-up and cataloging.
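The idea behind the metric, projecting signals into a low-dimensional space and scoring a candidate by its distance to the k nearest known transits, can be sketched in pure NumPy. This is a conceptual illustration under assumed synthetic data, not the Robovetter's actual embedding or calibration:

```python
# Minimal sketch: PCA via SVD for dimensionality reduction, then a
# k-nearest-neighbors distance to known transits as a "transit shaped"
# score (smaller = more transit-like). Synthetic data, invented names.
import numpy as np

def project(signals, n_components=2):
    """Mean-center and project signals onto the top principal components."""
    mean = signals.mean(axis=0)
    centered = signals - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]
    return centered @ basis.T, mean, basis

def knn_transit_metric(candidate, known_low, mean, basis, k=3):
    """Mean distance from the candidate to its k nearest known transits
    in the reduced space."""
    low = (candidate - mean) @ basis.T
    dists = np.linalg.norm(known_low - low, axis=1)
    return np.sort(dists)[:k].mean()

rng = np.random.default_rng(0)
phase = np.linspace(-1, 1, 50)
template = -np.exp(-(phase / 0.2) ** 2)          # dip-shaped transit template
depths = rng.uniform(0.5, 1.5, size=(20, 1))
known = depths * template + 0.05 * rng.standard_normal((20, 50))

low, mean, basis = project(known)
dip = template + 0.05 * rng.standard_normal(50)  # transit-like candidate
sine = np.sin(6 * np.pi * phase)                 # variable-star-like signal

m_dip = knn_transit_metric(dip, low, mean, basis)
m_sine = knn_transit_metric(sine, low, mean, basis)
print(m_dip < m_sine)  # the dip sits closer to the known transits
```

Thresholding such a score is one way a vetter can cheaply separate transit-shaped signals from variable stars and instrumental noise before human or automated follow-up.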
Since the start of the Wide Angle Search for Planets (WASP) program, more than 160 transiting exoplanets have been discovered in the WASP data. In the past, possible transit-like events identified by the WASP pipeline have been vetted by human inspection to eliminate false alarms and obvious false positives. The goal of the present paper is to assess the effectiveness of machine learning as a fast, automated, and reliable means of performing the same functions on ground-based wide-field transit-survey data without human intervention. To this end, we have created training and test datasets made up of stellar light curves showing a variety of signal types including planetary transits, eclipsing binaries, variable stars, and non-periodic signals. We use a combination of machine learning methods including Random Forest Classifiers (RFCs) and Convolutional Neural Networks (CNNs) to distinguish between the different types of signals. The final algorithms correctly identify planets in the test data ~90% of the time, although each method on its own has a significant fraction of false positives. We find that in practice, a combination of different methods offers the best approach to identifying the most promising exoplanet transit candidates in data from WASP, and by extension similar transit surveys.
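Since each method on its own has a significant false-positive fraction, one simple combination strategy is to require agreement between classifiers. The sketch below uses toy scores, not the paper's trained RFC and CNN, to show the idea:

```python
# Hedged sketch: promote a candidate only when both a random-forest-style
# score and a CNN-style score agree. The scores below are invented.

def combine(rfc_prob, cnn_prob, threshold=0.5):
    """Logical AND of two classifier scores: fewer false positives than
    either classifier alone, at some cost in completeness."""
    return rfc_prob >= threshold and cnn_prob >= threshold

candidates = [
    ("planet-like",      0.92, 0.88),
    ("eclipsing binary", 0.81, 0.12),  # fools the RFC, not the CNN
    ("variable star",    0.23, 0.67),  # fools the CNN, not the RFC
]
for name, rfc_p, cnn_p in candidates:
    print(name, combine(rfc_p, cnn_p))
```

Only the signal that both classifiers score highly survives, which mirrors the abstract's finding that a combination of methods best isolates the most promising transit candidates.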
Ground-based $\gamma$-ray observatories, such as the VERITAS array of imaging atmospheric Cherenkov telescopes, provide insight into very-high-energy (VHE, $\mathrm{E}>100\,\mathrm{GeV}$) astrophysical transient events. Examples include the evaporation of primordial black holes, gamma-ray bursts, and flaring blazars. Identifying such events with serendipitous locations and times of occurrence is difficult, so employing a robust search method becomes crucial. We present an implementation of a transient detection method based on deep-learning techniques for VERITAS. This data-driven approach significantly reduces the dependency on the characterization of the instrument response and on the modelling of the expected transient signal. The response of the instrument is affected by various factors, such as the elevation of the source and the night-sky background. Studying these effects allows us to enhance the deep-learning method with additional parameters that capture their influence on the data, improving performance and stability over a wide range of observational conditions. We illustrate our method on a historic flare of the blazar BL Lac detected by VERITAS in October 2016 and find a promising performance for detecting such a flare on timescales of minutes, comparing well with the VERITAS standard analysis.
Photometric observations of exoplanet transits can be used to derive the orbital and physical parameters of an exoplanet. We analyzed several transit light curves of exoplanets suitable for ground-based observation whose complete information is available in the Exoplanet Transit Database (ETD). We analyzed transit data of the planets HAT-P-8 b, HAT-P-16 b, HAT-P-21 b, HAT-P-22 b, HAT-P-28 b, and HAT-P-30 b using the AstroImageJ (AIJ) software package. In this paper, we investigated 82 transit light curves from the ETD, deriving their physical parameters and computing their mid-transit times for future Transit Timing Variation (TTV) analyses. The precise values of the parameters show that using AIJ as a fitting tool for follow-up observations can yield results comparable to the values in the NASA Exoplanet Archive (NEA). Such information will be invaluable given the number of expected discoveries from ground- and space-based exoplanet surveys.
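As a conceptual illustration of the mid-transit-time measurement that underlies TTV analyses, the sketch below slides a simple box-shaped transit model across a synthetic light curve and minimizes chi-square. This is an assumed toy model, not the AIJ fitting procedure, which uses limb-darkened transit models:

```python
# Illustrative sketch: grid-search a box-model mid-transit time t0 by
# minimizing chi-square against a noisy synthetic light curve.
import numpy as np

def box_model(t, t0, depth, duration):
    """Unit flux with a flat-bottomed dip of the given depth around t0."""
    flux = np.ones_like(t)
    flux[np.abs(t - t0) < duration / 2] -= depth
    return flux

def fit_mid_transit(t, flux, depth, duration, t0_grid):
    """Return the t0 on the grid that minimizes chi-square (unit errors)."""
    chi2 = [np.sum((flux - box_model(t, t0, depth, duration)) ** 2)
            for t0 in t0_grid]
    return t0_grid[int(np.argmin(chi2))]

rng = np.random.default_rng(1)
t = np.linspace(0.0, 0.3, 600)                  # time in days
true_t0, depth, duration = 0.157, 0.01, 0.1     # invented transit parameters
flux = box_model(t, true_t0, depth, duration) + 0.001 * rng.standard_normal(t.size)

t0_grid = np.linspace(0.05, 0.25, 401)
t0_fit = fit_mid_transit(t, flux, depth, duration, t0_grid)
print(round(t0_fit, 3))
```

Repeating such a fit over many epochs and comparing the measured mid-transit times against a linear ephemeris is what exposes the timing variations that TTV studies search for.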
Object detection in natural scenes can be a challenging task. In many real-life situations, the visible spectrum is not suitable for traditional computer vision tasks. Moving outside the visible range, e.g., to the thermal or near-infrared (NIR) bands, is much more beneficial in low-visibility conditions, and NIR images are very helpful for understanding an object's material quality. In this work, we captured images in both the thermal and NIR spectra for the object detection task. Since multi-spectral data covering both thermal and NIR was not available for the detection task, we had to collect the data ourselves. Data collection is a time-consuming process, and we faced many obstacles that we had to overcome. We train the YOLO v3 network from scratch to detect objects in multi-spectral images. To avoid overfitting, we also apply data augmentation and tune the hyperparameters.