
Discovery and Vetting of Exoplanets I: Benchmarking K2 Vetting Tools

Posted by Veselin Kostov
Publication date: 2019
Research field: Physics
Paper language: English





We have adapted the algorithmic tools developed during the Kepler mission to vet the quality of transit-like signals for use on the K2 mission data. Using the four sets of publicly available lightcurves on MAST, we produced a uniformly vetted catalog of 772 transiting planet candidates from K2 as listed at the NASA Exoplanet Archive in the K2 Table of Candidates. Our analysis marks 676 of these as planet candidates and 96 as false positives. All confirmed planets pass our vetting tests. Sixty of our false positives are new identifications -- effectively doubling the overall number of astrophysical signals mimicking planetary transits in K2 data. Most of the targets listed as false positives in our catalog either show prominent secondary eclipses, transit depths suggesting a stellar companion instead of a planet, or significant photocenter shifts during transit. We packaged our tools into the open-source, automated vetting pipeline DAVE (Discovery and Vetting of Exoplanets) designed to streamline follow-up efforts by reducing the time and resources wasted observing targets that are likely false positives. DAVE will also be a valuable tool for analyzing planet candidates from NASA's TESS mission, where several guest-investigator programs will provide independent lightcurve sets -- and likely many more from the community. We are currently testing DAVE on recently released TESS planet candidates and will present our results in a follow-up paper.
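As a concrete illustration of the vetting logic described above, the sketch below applies toy versions of the three named false-positive tests (secondary eclipse, transit depth, photocenter shift) to a phase-folded light curve. The function name and every threshold are hypothetical placeholders, not DAVE's actual implementation:

```python
import numpy as np

def vet_candidate(phase, flux, depth, centroid_shift_sigma):
    """Toy versions of three vetting tests named in the abstract.
    All thresholds are illustrative placeholders, not DAVE's values.

    phase : array of orbital phases in [0, 1), primary transit at 0
    flux  : normalized flux for each phase
    depth : fractional primary transit depth
    centroid_shift_sigma : significance of the in-transit centroid offset
    """
    flags = []

    # 1. A significant dip near phase 0.5 (a secondary eclipse)
    #    suggests an eclipsing binary rather than a planet.
    near_secondary = np.abs(phase - 0.5) < 0.02
    baseline = np.median(flux[~near_secondary])
    secondary_depth = baseline - np.median(flux[near_secondary])
    if secondary_depth > 3.0 * np.std(flux[~near_secondary]):
        flags.append("prominent secondary eclipse")

    # 2. A depth of several percent implies a stellar-sized companion
    #    for a typical dwarf host star.
    if depth > 0.05:
        flags.append("depth suggests stellar companion")

    # 3. A significant photocenter shift during transit indicates the
    #    signal comes from a nearby blended source, not the target.
    if centroid_shift_sigma > 3.0:
        flags.append("photocenter shift during transit")

    return ("false positive", flags) if flags else ("planet candidate", [])
```

In a real pipeline these cuts would be calibrated against confirmed planets and known eclipsing binaries rather than fixed by hand.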




Read also

NASA's Transiting Exoplanet Survey Satellite (TESS) presents us with an unprecedented volume of space-based photometric observations that must be analyzed in an efficient and unbiased manner. With at least ~1,000,000 new light curves generated every month from full frame images alone, automated planet candidate identification has become an attractive alternative to human vetting. Here we present a deep learning model capable of performing triage and vetting on TESS candidates. Our model is modified from an existing neural network designed to automatically classify Kepler candidates, and is the first neural network to be trained and tested on real TESS data. In triage mode, our model can distinguish transit-like signals (planet candidates and eclipsing binaries) from stellar variability and instrumental noise with an average precision (the weighted mean of precisions over all classification thresholds) of 97.0% and an accuracy of 97.4%. In vetting mode, the model is trained to identify only planet candidates with the help of newly added scientific domain knowledge, and achieves an average precision of 69.3% and an accuracy of 97.8%. We apply our model on new data from Sector 6, and present 288 new signals that received the highest scores in triage and vetting and were also identified as planet candidates by human vetters. We also provide a homogeneously classified set of TESS candidates suitable for future training.
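The two headline metrics quoted above can be reproduced for any classifier score vector with scikit-learn; the labels and scores below are invented purely to show the computation:

```python
import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score

# Invented labels (1 = transit-like signal) and network scores; in the
# paper these would come from the trained model on real TESS data.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.92, 0.10, 0.85, 0.70, 0.30, 0.05, 0.60, 0.45])

# Average precision: precisions at every score threshold, weighted by
# the corresponding increase in recall.
ap = average_precision_score(y_true, y_score)

# Accuracy at a fixed decision threshold of 0.5.
acc = accuracy_score(y_true, y_score >= 0.5)
print(f"average precision = {ap:.3f}, accuracy = {acc:.3f}")
```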
The Kepler Mission was designed to identify and characterize transiting planets in the Kepler Field of View and to determine their occurrence rates. Emphasis was placed on identification of Earth-size planets orbiting in the Habitable Zone of their host stars. Science data were acquired for a period of four years. Long-cadence data with 29.4 min sampling were obtained for ~200,000 individual stellar targets in at least one observing quarter in the primary Kepler Mission. Light curves for target stars are extracted in the Kepler Science Data Processing Pipeline, and are searched for transiting planet signatures. A Threshold Crossing Event is generated in the transit search for targets where the transit detection threshold is exceeded and transit consistency checks are satisfied. These targets are subjected to further scrutiny in the Data Validation (DV) component of the Pipeline. Transiting planet candidates are characterized in DV, and light curves are searched for additional planets after transit signatures are modeled and removed. A suite of diagnostic tests is performed on all candidates to aid in discrimination between genuine transiting planets and instrumental or astrophysical false positives. Data products are generated per target and planet candidate to document and display transiting planet model fit and diagnostic test results. These products are exported to the Exoplanet Archive at the NASA Exoplanet Science Institute, and are available to the community. We describe the DV architecture and diagnostic tests, and provide a brief overview of the data products. Transiting planet modeling and the search for multiple planets on individual targets are described in a companion paper. The final revision of the Kepler Pipeline code base is available to the general public through GitHub. The Kepler Pipeline has also been modified to support the TESS Mission which will commence in 2018.
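A minimal stand-in for the transit search that generates a Threshold Crossing Event, using astropy's Box Least Squares periodogram in place of the Pipeline's actual wavelet-based matched filter; the injected signal and the significance proxy are illustrative only:

```python
import numpy as np
from astropy.timeseries import BoxLeastSquares

# Synthetic long-cadence light curve: 90 days at 29.4 min sampling with
# a 0.12-day box transit injected every 10 days.
t = np.arange(0.0, 90.0, 29.4 / (60.0 * 24.0))
rng = np.random.default_rng(42)
flux = 1.0 + 1e-4 * rng.standard_normal(t.size)
flux[(t % 10.0) < 0.12] -= 5e-4

# Search a grid of trial periods for a box-shaped dip ~3 hours long.
results = BoxLeastSquares(t, flux).autopower(0.12)
best = np.argmax(results.power)

# Kepler's pipeline requires a multiple event statistic of at least
# 7.1 sigma; this peak-significance proxy is a crude simplification.
significance = ((results.power[best] - np.median(results.power))
                / np.std(results.power))
if significance > 7.1:
    print(f"Threshold Crossing Event: P = {results.period[best]:.2f} d, "
          f"depth = {results.depth[best]:.1e}")
```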
State-of-the-art exoplanet transit surveys are producing ever increasing quantities of data. To make the best use of this resource, in detecting interesting planetary systems or in determining accurate planetary population statistics, requires new automated methods. Here we describe a machine learning algorithm that forms an integral part of the pipeline for the NGTS transit survey, demonstrating the efficacy of machine learning in selecting planetary candidates from multi-night ground based survey data. Our method uses a combination of random forests and self-organising maps to rank planetary candidates, achieving an AUC score of 97.6% in ranking 12368 injected planets against 27496 false positives in the NGTS data. We build on past examples by using injected transit signals to form a training set, a necessary development for applying similar methods to upcoming surveys. We also make the autovet code used to implement the algorithm publicly accessible. autovet is designed to perform machine-learned vetting of planetary candidates, and can utilise a variety of methods. The apparent robustness of machine learning techniques, whether on space-based or the qualitatively different ground-based data, highlights their importance to future surveys such as TESS and PLATO and the need to better understand their advantages and pitfalls in an exoplanetary context.
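A toy version of the random-forest half of this approach (the self-organising-map stage is omitted), trained on fabricated feature vectors standing in for injected transits and false positives; none of this reflects autovet's actual features or settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Fabricated per-candidate feature vectors (e.g. depth, duration, SNR,
# odd/even depth difference) standing in for survey detections: label 1
# for injected transits, 0 for false positives.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(1.0, 0.7, size=(500, 4)),   # injected planets
               rng.normal(0.0, 0.7, size=(500, 4))])  # false positives
y = np.concatenate([np.ones(500), np.zeros(500)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_tr, y_tr)

# Rank held-out candidates by planet probability; AUC measures how well
# injected planets are ranked above false positives.
scores = clf.predict_proba(X_te)[:, 1]
print(f"held-out AUC = {roc_auc_score(y_te, scores):.3f}")
```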
We present TRICERATOPS, a new Bayesian tool that can be used to vet and validate TESS Objects of Interest (TOIs). We test the tool on 68 TOIs that have been previously confirmed as planets or rejected as astrophysical false positives. By looking in the false positive probability (FPP) -- nearby false positive probability (NFPP) plane, we define criteria that TOIs must meet to be classified as validated planets (FPP < 0.015 and NFPP < 10^-3), likely planets (FPP < 0.5 and NFPP < 10^-3), and likely nearby false positives (NFPP > 10^-1). We apply this procedure to 384 unclassified TOIs and statistically validate 12, classify 125 as likely planets, and classify 52 as likely nearby false positives. Of the 12 statistically validated planets, 9 are newly validated. TRICERATOPS is currently the only TESS vetting and validation tool that models transits from nearby contaminant stars in addition to the target star. We therefore encourage use of this tool to prioritize follow-up observations that confirm bona fide planets and identify false positives originating from nearby stars.
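The classification criteria quoted above translate directly into code; this small helper applies them to a TOI's computed (FPP, NFPP) pair (the function name is ours, not TRICERATOPS's API):

```python
def classify_toi(fpp: float, nfpp: float) -> str:
    """Apply the FPP/NFPP decision criteria quoted in the abstract."""
    if nfpp > 1e-1:
        return "likely nearby false positive"
    if fpp < 0.015 and nfpp < 1e-3:
        return "validated planet"
    if fpp < 0.5 and nfpp < 1e-3:
        return "likely planet"
    return "unclassified"

print(classify_toi(fpp=0.010, nfpp=5e-4))  # -> validated planet
print(classify_toi(fpp=0.200, nfpp=5e-4))  # -> likely planet
print(classify_toi(fpp=0.900, nfpp=0.30))  # -> likely nearby false positive
```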
This paper presents SAILFISH, a scalable system for automatically finding state-inconsistency bugs in smart contracts. To make the analysis tractable, we introduce a hybrid approach that includes (i) a light-weight exploration phase that dramatically reduces the number of instructions to analyze, and (ii) a precise refinement phase based on symbolic evaluation guided by our novel value-summary analysis, which generates extra constraints to over-approximate the side effects of whole-program execution, thereby ensuring the precision of the symbolic evaluation. We developed a prototype of SAILFISH and evaluated its ability to detect two state-inconsistency flaws, viz., reentrancy and transaction order dependence (TOD), in Ethereum smart contracts. Further, we present detection rules for other kinds of smart contract flaws that SAILFISH can be extended to detect. Our experiments demonstrate the efficiency of our hybrid approach as well as the benefit of the value-summary analysis. In particular, we show that SAILFISH outperforms five state-of-the-art smart contract analyzers (SECURIFY, MYTHRIL, OYENTE, SEREUM, and VANDAL) in terms of performance and precision. In total, SAILFISH discovered 47 previously unknown vulnerable smart contracts out of 89,853 smart contracts from ETHERSCAN.
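To make the transaction-order-dependence flaw concrete, here is a toy read/write-set conflict check; SAILFISH's actual value-summary and symbolic-evaluation machinery is far more involved, and everything here, including the function and slot names, is a hypothetical simplification:

```python
def tod_conflict(tx_a: dict, tx_b: dict) -> bool:
    """Flag a potential transaction-order dependence: the outcome can
    change if the two transactions' storage accesses interleave
    differently (write/read or write/write on a shared slot)."""
    return bool(tx_a["writes"] & (tx_b["reads"] | tx_b["writes"]) or
                tx_b["writes"] & tx_a["reads"])

# A buyer reads `price` while the owner can rewrite it: classic TOD.
buy = {"reads": {"price"}, "writes": {"balance"}}
set_price = {"reads": set(), "writes": {"price"}}
print(tod_conflict(buy, set_price))  # True: ordering matters
```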