
Transient-optimised real-bogus classification with Bayesian Convolutional Neural Networks -- sifting the GOTO candidate stream

Posted by Thomas Killestein
Publication date: 2021
Research field: Physics
Paper language: English





Large-scale sky surveys have played a transformative role in our understanding of astrophysical transients, only made possible by increasingly powerful machine learning-based filtering to accurately sift through the vast quantities of incoming data generated. In this paper, we present a new real-bogus classifier based on a Bayesian convolutional neural network that provides nuanced, uncertainty-aware classification of transient candidates in difference imaging, and demonstrate its application to the datastream from the GOTO wide-field optical survey. Not only are candidates assigned a well-calibrated probability of being real, but also an associated confidence that can be used to prioritise human vetting efforts and inform future model optimisation via active learning. To fully realise the potential of this architecture, we present a fully-automated training set generation method which requires no human labelling, incorporating a novel data-driven augmentation method to significantly improve the recovery of faint and nuclear transient sources. We achieve competitive classification accuracy (FPR and FNR both below 1%) compared against classifiers trained with fully human-labelled datasets, whilst being significantly quicker and less labour-intensive to build. This data-driven approach is uniquely scalable to the upcoming challenges and data needs of next-generation transient surveys. We make our data generation and model training codes available to the community.
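To make the uncertainty-aware classification described above concrete, the following is a minimal sketch (not the authors' actual GOTO code) of a Monte Carlo dropout approach: dropout stays active at prediction time, several stochastic forward passes are averaged into a probability of being real, and the spread across passes serves as the confidence that could prioritise human vetting or active learning. The architecture, the 21x21 stamp size, the dropout rate, and the number of samples are illustrative assumptions.

```python
# Illustrative sketch only: a Monte Carlo dropout CNN for real-bogus
# classification of difference-image stamps. All hyperparameters are
# assumptions for illustration, not the GOTO model described in the paper.
import torch
import torch.nn as nn

class RealBogusCNN(nn.Module):
    def __init__(self, p_drop=0.3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Dropout2d(p_drop),          # dropout kept stochastic at test time
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 5 * 5, 64), nn.ReLU(),
            nn.Dropout(p_drop),
            nn.Linear(64, 1),              # logit of P(real)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

@torch.no_grad()
def predict_with_uncertainty(model, stamps, n_samples=30):
    """Average n_samples stochastic forward passes (MC dropout); return the
    mean P(real) and the standard deviation as a simple confidence proxy."""
    model.train()  # keep dropout layers active
    probs = torch.stack([torch.sigmoid(model(stamps)).squeeze(-1)
                         for _ in range(n_samples)])
    return probs.mean(dim=0), probs.std(dim=0)

# Example: score a batch of 21x21 pixel difference-image cutouts.
model = RealBogusCNN()
stamps = torch.randn(8, 1, 21, 21)        # placeholder data
p_real, confidence = predict_with_uncertainty(model, stamps)
```

Candidates with a high standard deviation could then be routed to human scanners, while confidently classified ones pass straight through the filter.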




Read also

Efficient automated detection of flux-transient, reoccurring flux-variable, and moving objects is increasingly important for large-scale astronomical surveys. We present braai, a convolutional-neural-network, deep-learning real/bogus classifier designed to separate genuine astrophysical events and objects from false positive, or bogus, detections in the data of the Zwicky Transient Facility (ZTF), a new robotic time-domain survey currently in operation at the Palomar Observatory in California, USA. Braai demonstrates state-of-the-art performance as quantified by its low false negative and false positive rates. We describe the open-source software tools used internally at Caltech to archive and access ZTF's alerts and light curves (Kowalski), and to label the data (Zwickyverse). We also report the initial results of the classifier deployment on the Edge Tensor Processing Units (TPUs), which show comparable performance in terms of accuracy, but in a much more (cost-)efficient manner, which has significant implications for current and future surveys.
Current synoptic sky surveys monitor large areas of the sky to find variable and transient astronomical sources. As the number of detections per night at a single telescope easily exceeds several thousand, current detection pipelines make intensive use of machine learning algorithms to classify the detected objects and to filter out the most interesting candidates. A number of upcoming surveys will produce up to three orders of magnitude more data, which renders high-precision classification systems essential to reduce the manual and, hence, expensive vetting by human experts. We present an approach based on convolutional neural networks to discriminate between true astrophysical sources and artefacts in reference-subtracted optical images. We show that relatively simple networks are already competitive with state-of-the-art systems and that their quality can further be improved via slightly deeper networks and additional preprocessing steps -- eventually yielding models outperforming state-of-the-art systems. In particular, our best model correctly classifies about 97.3% of all real and 99.7% of all bogus instances on a test set containing 1,942 bogus and 227 real instances in total. Furthermore, the networks considered in this work can also successfully classify these objects at hand without relying on difference images, which might pave the way for future detection pipelines not containing image subtraction steps at all.
The advent of wide-field sky surveys has led to the growth of transient and variable source discoveries. The data deluge produced by these surveys has necessitated the use of machine learning (ML) and deep learning (DL) algorithms to sift through the vast incoming data stream. A problem that arises in real-world applications of learning algorithms for classification is imbalanced data, where a class of objects within the data is underrepresented, leading to a bias for over-represented classes in the ML and DL classifiers. We present a recurrent neural network (RNN) classifier that takes in photometric time-series data and additional contextual information (such as distance to nearby galaxies and on-sky position) to produce real-time classification of objects observed by the Gravitational-wave Optical Transient Observer (GOTO), and use an algorithm-level approach for handling imbalance with a focal loss function. The classifier is able to achieve an Area Under the Curve (AUC) score of 0.972 when using all available photometric observations to classify variable stars, supernovae, and active galactic nuclei. The RNN architecture allows us to classify incomplete light curves, and measure how performance improves as more observations are included. We also investigate the role that contextual information plays in producing reliable object classification.
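As an illustration of the algorithm-level imbalance handling mentioned above, the sketch below implements a focal loss, which down-weights easy, well-classified examples so the classifier is not dominated by over-represented classes. The binary form and the gamma and alpha values shown are assumptions for illustration; the GOTO classifier described here is multi-class and its tuned parameters are not given in the abstract.

```python
# Illustrative sketch of a binary focal loss for class-imbalanced training.
# gamma and alpha are assumed defaults, not values tuned for GOTO data.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t)."""
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-ce)                        # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()

# Example: well-classified examples (large |logit| with the right sign)
# contribute little to the loss, so rare classes are not swamped.
logits = torch.tensor([2.0, -1.5, 0.3])
targets = torch.tensor([1.0, 0.0, 1.0])
loss = focal_loss(logits, targets)
```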
The observation of the transient sky through a multitude of astrophysical messengers has led to several scientific breakthroughs over the last two decades, thanks to the fast evolution of the observational techniques and strategies employed by astronomers. It now requires the ability to coordinate multi-wavelength and multi-messenger follow-up campaigns with instruments both in space and on the ground, jointly capable of scanning a large fraction of the sky with a high imaging cadence and duty cycle. In the optical domain, the key challenge for wide-field-of-view telescopes covering tens to hundreds of square degrees is to deal with the detection, identification and classification of hundreds to thousands of optical transient (OT) candidates every night in a reasonable amount of time. In the last decade, new automated tools based on machine learning approaches have been developed to perform these tasks with low computing time and high classification efficiency. In this paper, we present an efficient classification method using Convolutional Neural Networks (CNN) to discard any bogus detection falsely identified in astrophysical images in the optical domain. We designed this tool to improve the performance of the OT detection pipeline of the Ground Wide Angle Cameras (GWAC) telescopes, a network of robotic telescopes aiming at monitoring the optical transient sky down to R=16 with a 15-second imaging cadence. We applied our trained CNN classifier to a sample of 1472 GWAC OT candidates detected by the real-time detection pipeline. It yields good classification performance, with 94% of events correctly classified and a false positive rate of 4%.
Astronomers require efficient automated detection and classification pipelines when conducting large-scale surveys of the (optical) sky for variable and transient sources. Such pipelines are fundamentally important, as they permit rapid follow-up and analysis of those detections most likely to be of scientific value. We therefore present a deep learning pipeline based on the convolutional neural network architecture called MeerCRAB. It is designed to filter out so-called bogus detections from true astrophysical sources in the transient detection pipeline of the MeerLICHT telescope. Optical candidates are described using a variety of 2D images and numerical features extracted from those images. The relationship between the input images and the target classes is unclear, since the ground truth is poorly defined and often the subject of debate. This makes it difficult to determine which source of information should be used to train a classification algorithm. We therefore used two methods for labelling our data: (i) thresholding and (ii) latent class model approaches. We deployed variants of MeerCRAB that employed different network architectures trained using different combinations of input images and training set choices, based on classification labels provided by volunteers. The deepest network worked best, with an accuracy of 99.5% and a Matthews correlation coefficient (MCC) value of 0.989. The best model was integrated into the MeerLICHT transient vetting pipeline, enabling the accurate and efficient classification of detected transients and allowing researchers to select the most promising candidates for their research goals.