
Machine-learning Approaches to Exoplanet Transit Detection and Candidate Validation in Wide-field Ground-based Surveys

Publication date: 2018
Field: Physics
Language: English





Since the start of the Wide Angle Search for Planets (WASP) program, more than 160 transiting exoplanets have been discovered in the WASP data. In the past, possible transit-like events identified by the WASP pipeline have been vetted by human inspection to eliminate false alarms and obvious false positives. The goal of the present paper is to assess the effectiveness of machine learning as a fast, automated, and reliable means of performing the same functions on ground-based wide-field transit-survey data without human intervention. To this end, we have created training and test datasets made up of stellar light curves showing a variety of signal types including planetary transits, eclipsing binaries, variable stars, and non-periodic signals. We use a combination of machine learning methods including Random Forest Classifiers (RFCs) and Convolutional Neural Networks (CNNs) to distinguish between the different types of signals. The final algorithms correctly identify planets in the test data ~90% of the time, although each method on its own has a significant fraction of false positives. We find that in practice, a combination of different methods offers the best approach to identifying the most promising exoplanet transit candidates in data from WASP, and by extension similar transit surveys.
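As a rough illustration of how such a combination can work in practice, the sketch below trains a Random Forest on simple summary statistics and a small 1D CNN on the folded light curve itself, then averages their transit probabilities. The synthetic data, feature choices, network architecture, and the score-averaging rule are assumptions made for this example only; it is not the WASP pipeline.

```python
# Minimal sketch: combining a Random Forest and a 1D CNN vote on light curves.
# The features, architecture, and averaging rule are illustrative assumptions,
# not the WASP pipeline implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from tensorflow import keras

rng = np.random.default_rng(0)
n_curves, n_points = 200, 256

# Placeholder data: phase-folded flux arrays and binary labels (1 = transit-like).
flux = rng.normal(1.0, 1e-3, size=(n_curves, n_points))
labels = rng.integers(0, 2, size=n_curves)
flux[labels == 1, 120:136] -= 0.01          # crude box-shaped "transit"

# Random forest on simple summary statistics of each curve.
summary = np.column_stack([flux.min(1), flux.std(1), np.ptp(flux, 1)])
rfc = RandomForestClassifier(n_estimators=300, random_state=0)
rfc.fit(summary, labels)

# Small 1D CNN on the folded light curve itself.
cnn = keras.Sequential([
    keras.layers.Input(shape=(n_points, 1)),
    keras.layers.Conv1D(16, 5, activation="relu"),
    keras.layers.MaxPooling1D(4),
    keras.layers.Conv1D(32, 5, activation="relu"),
    keras.layers.GlobalMaxPooling1D(),
    keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy")
cnn.fit(flux[..., None], labels, epochs=5, verbose=0)

# Combine the two classifiers by averaging their transit probabilities.
p_rfc = rfc.predict_proba(summary)[:, 1]
p_cnn = cnn.predict(flux[..., None], verbose=0).ravel()
combined_score = 0.5 * (p_rfc + p_cnn)      # rank candidates by this score
```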



Related research

We introduce a new machine-learning-based technique to detect exoplanets using the transit method. Machine learning and deep learning techniques have proven to be broadly applicable in various scientific research areas. We aim to exploit some of these methods to improve the conventional algorithm-based approaches presently used in astrophysics to detect exoplanets. Using the time-series analysis library TSFresh to analyse light curves, we extracted 789 features from each curve, which capture the information about the characteristics of a light curve. We then used these features to train a gradient boosting classifier using the machine learning tool lightgbm. This approach was tested on simulated data, which showed that it is more effective than the conventional box least squares fitting (BLS) method. We further found that our method produced comparable results to existing state-of-the-art deep learning models, while being much more computationally efficient and without needing folded and secondary views of the light curves. For Kepler data, the method is able to predict a planet with an AUC of 0.948, so that 94.8 per cent of the true planet signals are ranked higher than non-planet signals. The resulting recall is 0.96, so that 96 per cent of real planets are classified as planets. For the Transiting Exoplanet Survey Satellite (TESS) data, we found our method can classify light curves with an accuracy of 0.98, and is able to identify planets with a recall of 0.82 at a precision of 0.63.
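A minimal sketch of this kind of pipeline, assuming a long-format light-curve table and the default TSFresh feature set, might look as follows; the synthetic data and LightGBM settings are illustrative stand-ins, not those used in the paper.

```python
# Minimal sketch of the TSFresh + LightGBM approach described above; the long-format
# light-curve DataFrame and default feature settings are illustrative assumptions.
import numpy as np
import pandas as pd
import lightgbm as lgb
from tsfresh import extract_features
from tsfresh.utilities.dataframe_functions import impute

rng = np.random.default_rng(1)
n_curves, n_points = 50, 200

# Long-format table expected by tsfresh: one row per time stamp per light curve.
frames = []
labels = rng.integers(0, 2, size=n_curves)
for i in range(n_curves):
    flux = rng.normal(1.0, 1e-3, n_points)
    if labels[i]:
        flux[90:110] -= 0.01                      # crude injected transit
    frames.append(pd.DataFrame({"id": i, "time": np.arange(n_points), "flux": flux}))
long_df = pd.concat(frames, ignore_index=True)

# Extract the default tsfresh feature set (several hundred features per curve).
features = extract_features(long_df, column_id="id", column_sort="time")
impute(features)                                  # replace NaN/inf produced by some features

# Train a gradient-boosting classifier on the extracted features.
clf = lgb.LGBMClassifier(n_estimators=200)
clf.fit(features.to_numpy(), labels)
scores = clf.predict_proba(features.to_numpy())[:, 1]   # transit probability per curve
```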
Context. The TESS and PLATO missions are expected to find vast numbers of new transiting planet candidates. However, only a fraction of these candidates will be legitimate planets, and their validation will require a significant amount of follow-up resources. Radial velocity follow-up can be carried out only for the most promising candidates around bright, slowly rotating stars. Thus, candidates need to be vetted using cheaper methods before RV resources are devoted to them, and, in the cases for which an RV confirmation is not feasible, the candidates' true nature needs to be determined based on these alternative methods alone. Aims. We study the applicability of multicolour transit photometry in the validation of transiting planet candidates when the candidate signal arises from a real astrophysical source. We seek to answer how securely the true, uncontaminated star-planet radius ratio can be estimated when the light curve may contain contamination from unresolved light sources inside the photometry aperture, by combining multicolour transit observations with a physics-based contamination model. Methods. The study is based on simulations and ground-based transit observations. The analyses are carried out with a contamination model integrated into the PyTransit v2 transit modelling package, and the observations are carried out with the MuSCAT2 multicolour imager installed on the 1.5 m TCS at the Teide Observatory. Results. We show that multicolour transit photometry can be used to estimate the amount of flux contamination and the true radius ratio. Combining the true radius ratio with an estimate of the stellar radius yields the true absolute radius of the transiting object, which is a valuable quantity in statistical candidate validation and is enough in itself to validate a candidate whose radius falls below the theoretical lower limit for a brown dwarf.
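The relation this technique relies on can be stated compactly: if a fraction c of the in-aperture flux in a given passband comes from contaminating sources, the apparent transit depth is diluted by a factor (1 - c), so the true radius ratio is k_true = k_apparent / sqrt(1 - c). The sketch below illustrates that correction with arbitrary numbers; it does not use, and should not be read as, the PyTransit contamination model itself.

```python
# Minimal sketch of the depth-dilution relation that multicolour validation exploits:
# if a fraction c of the in-aperture flux is contamination, the apparent depth is
# d_app = d_true * (1 - c), so k_true = k_app / sqrt(1 - c). The per-passband
# contamination values below are arbitrary illustrative numbers, not a PyTransit model.
import numpy as np

def true_radius_ratio(k_apparent, contamination):
    """Undo flux dilution to recover the uncontaminated star-planet radius ratio."""
    return k_apparent / np.sqrt(1.0 - contamination)

# Apparent radius ratios measured in four passbands with a chromatic contaminant:
# a blended source contributes a different flux fraction in each band.
k_apparent = np.array([0.090, 0.095, 0.098, 0.100])
contamination = np.array([0.35, 0.27, 0.22, 0.19])

k_true = true_radius_ratio(k_apparent, contamination)
print(k_true)   # roughly constant across bands if the contamination estimate is right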
State-of-the-art exoplanet transit surveys are producing ever increasing quantities of data. Making the best use of this resource, whether in detecting interesting planetary systems or in determining accurate planetary population statistics, requires new automated methods. Here we describe a machine learning algorithm that forms an integral part of the pipeline for the NGTS transit survey, demonstrating the efficacy of machine learning in selecting planetary candidates from multi-night ground-based survey data. Our method uses a combination of random forests and self-organising maps to rank planetary candidates, achieving an AUC score of 97.6% in ranking 12368 injected planets against 27496 false positives in the NGTS data. We build on past examples by using injected transit signals to form a training set, a necessary development for applying similar methods to upcoming surveys. We also make the autovet code used to implement the algorithm publicly accessible. autovet is designed to perform machine-learned vetting of planetary candidates, and can utilise a variety of methods. The apparent robustness of machine learning techniques, whether applied to space-based data or to the qualitatively different ground-based data, highlights their importance to future surveys such as TESS and PLATO, and the need to better understand their advantages and pitfalls in an exoplanetary context.
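A toy version of ranking injected transits against false positives might look like the sketch below, which stands in the third-party minisom package for the self-organising map and a scikit-learn random forest; it is not the autovet implementation, and the synthetic data and feature choices are assumptions made for illustration.

```python
# Minimal sketch of ranking injected transits against false positives with a
# self-organising map plus random forest and reporting an AUC score.
# This is an illustrative stand-in, not the autovet implementation.
import numpy as np
from minisom import MiniSom
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n_curves, n_points = 400, 128

# Synthetic phase-folded light curves: label 1 = injected transit, 0 = false positive.
flux = rng.normal(1.0, 1e-3, size=(n_curves, n_points))
labels = rng.integers(0, 2, size=n_curves)
flux[labels == 1, 56:72] -= 0.008

# Train a small SOM on the folded curves and use the best-matching-unit
# coordinates as two shape features.
som = MiniSom(8, 8, n_points, sigma=1.0, learning_rate=0.5, random_seed=2)
som.train_random(flux, 1000)
bmu = np.array([som.winner(x) for x in flux], dtype=float)

# Combine SOM coordinates with simple summary statistics and rank with a forest.
features = np.column_stack([bmu, flux.min(1), flux.std(1)])
rfc = RandomForestClassifier(n_estimators=300, random_state=0)
rfc.fit(features, labels)
scores = rfc.predict_proba(features)[:, 1]
print("AUC:", roc_auc_score(labels, scores))      # on the training set, for illustration
```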
We describe a new metric that uses machine learning to determine if a periodic signal found in a photometric time series appears to be shaped like the signature of a transiting exoplanet. This metric uses dimensionality reduction and k-nearest neighbors to determine whether a given signal is sufficiently similar to known transits in the same data set. This metric is being used by the Kepler Robovetter to determine which signals should be part of the Q1-Q17 DR24 catalog of planetary candidates. The Kepler Mission reports roughly 20,000 potential transiting signals with each run of its pipeline, yet only a few thousand appear sufficiently transit-shaped to be part of the catalog. The other signals tend to be variable stars and instrumental noise. With this metric we are able to remove more than 90% of the non-transiting signals while retaining more than 99% of the known planet candidates. When tested with injected transits, less than 1% are lost. This metric will enable the Kepler mission and future missions looking for transiting planets to rapidly and consistently find the best planetary candidates for follow-up and cataloging.
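A simplified stand-in for such a metric, assuming PCA for the dimensionality reduction and Euclidean k-nearest-neighbor distances to a library of known transits, could be written as follows; the actual transformation and distance used by the Robovetter may differ, and the data here are synthetic placeholders.

```python
# Minimal sketch of a transit-shape metric based on dimensionality reduction and
# k-nearest neighbors: project a folded signal into a low-dimensional space and
# measure its mean distance to known transits. The PCA + Euclidean-distance choices
# are illustrative assumptions, not the Robovetter's exact metric.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(3)
n_known, n_points = 300, 128

# Library of known, phase-folded transit signals (crude box shapes plus noise).
known = rng.normal(1.0, 5e-4, size=(n_known, n_points))
known[:, 56:72] -= 0.005

# Reduce dimensionality and index the known transits.
pca = PCA(n_components=10).fit(known)
nn = NearestNeighbors(n_neighbors=15).fit(pca.transform(known))

def transit_shape_metric(signal):
    """Mean distance to the 15 nearest known transits; smaller = more transit-like."""
    dist, _ = nn.kneighbors(pca.transform(signal.reshape(1, -1)))
    return dist.mean()

transit_like = rng.normal(1.0, 5e-4, n_points)
transit_like[56:72] -= 0.005
sinusoid = 1.0 + 0.005 * np.sin(np.linspace(0, 6 * np.pi, n_points))
print(transit_shape_metric(transit_like), transit_shape_metric(sinusoid))
```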
Galaxy morphology is a fundamental quantity that is essential not only for the full spectrum of galaxy-evolution studies, but also for a plethora of science in observational cosmology. While a rich literature exists on morphological-classification techniques, the unprecedented data volumes, coupled, in some cases, with the short cadences of forthcoming Big-Data surveys (e.g. from the LSST), present novel challenges for this field. Large data volumes make such datasets intractable for visual inspection (even via massively-distributed platforms like Galaxy Zoo), while short cadences make it difficult to employ techniques like supervised machine learning, since it may be impractical to repeatedly produce training sets on short timescales. Unsupervised machine learning, which does not require training sets, is ideally suited to the morphological analysis of new and forthcoming surveys. Here, we employ an algorithm that performs clustering of graph representations, in order to group image patches with similar visual properties, and objects constructed from those patches, like galaxies. We implement the algorithm on the Hyper-Suprime-Cam Subaru-Strategic-Program Ultra-Deep survey, to autonomously reduce the galaxy population to a small number (160) of morphological clusters, populated by galaxies with similar morphologies, which are then benchmarked using visual inspection. The morphological classifications (which we release publicly) exhibit a high level of purity, and reproduce known trends in key galaxy properties as a function of morphological type at z<1 (e.g. stellar-mass functions, rest-frame colours and the position of galaxies on the star-formation main sequence). Our study demonstrates the power of unsupervised machine learning in performing accurate morphological analysis, which will become indispensable in this new era of deep-wide surveys.
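As a schematic of what clustering graph representations can mean in practice, the sketch below builds a k-nearest-neighbour graph over placeholder patch features and groups it with modularity-based community detection; the features, graph construction, and clustering routine are generic stand-ins, not the algorithm used in the study.

```python
# Minimal sketch of clustering graph representations of image patches: build a
# k-nearest-neighbour similarity graph over patch feature vectors and group them
# with modularity-based community detection. The random features and networkx
# clustering are illustrative stand-ins, not the algorithm used in the paper.
import numpy as np
import networkx as nx
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(4)

# Placeholder "patch features": two visually distinct populations of patches.
smooth = rng.normal(0.0, 0.3, size=(100, 16))
clumpy = rng.normal(2.0, 0.3, size=(100, 16))
features = np.vstack([smooth, clumpy])

# k-NN graph linking each patch to its most similar neighbours.
adjacency = kneighbors_graph(features, n_neighbors=8, mode="connectivity")
graph = nx.from_scipy_sparse_array(adjacency)

# Group patches into morphological clusters via greedy modularity maximisation.
clusters = nx.algorithms.community.greedy_modularity_communities(graph)
print([len(c) for c in clusters])   # ideally recovers the two patch populations
```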