
Vetting the optical transient candidates detected by the GWAC network using convolutional neural networks

Posted by: Damien Turpin
Publication date: 2020
Research field: Physics
Paper language: English





The observation of the transient sky through a multitude of astrophysical messengers has led to several scientific breakthroughs over the last two decades, thanks to the fast evolution of the observational techniques and strategies employed by astronomers. It now requires the ability to coordinate multi-wavelength and multi-messenger follow-up campaigns with instruments, both in space and on the ground, jointly capable of scanning a large fraction of the sky with a high imaging cadence and duty cycle. In the optical domain, the key challenge for wide-field-of-view telescopes covering tens to hundreds of square degrees is to deal with the detection, identification and classification of hundreds to thousands of optical transient (OT) candidates every night in a reasonable amount of time. In the last decade, new automated tools based on machine learning approaches have been developed to perform these tasks with a low computing time and a high classification efficiency. In this paper, we present an efficient classification method using Convolutional Neural Networks (CNN) to discard bogus sources falsely detected in astrophysical images in the optical domain. We designed this tool to improve the performance of the OT detection pipeline of the Ground Wide Angle Cameras (GWAC), a network of robotic telescopes aiming at monitoring the optical transient sky down to R=16 with a 15-second imaging cadence. We applied our trained CNN classifier to a sample of 1472 GWAC OT candidates detected by the real-time detection pipeline. It yields a good classification performance, with 94% of events correctly classified and a false positive rate of 4%.
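As a concrete illustration of the kind of real/bogus CNN described above, here is a minimal sketch in PyTorch. It is not the authors' GWAC architecture: the 21x21 stamp size, layer widths, and single-channel input are illustrative assumptions.

# Minimal real/bogus CNN sketch for small candidate stamps.
# NOT the authors' exact GWAC model: stamp size, layer widths and
# the single-channel input are illustrative assumptions.
import torch
import torch.nn as nn

class RealBogusCNN(nn.Module):
    def __init__(self, stamp_size: int = 21):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        flat = 32 * (stamp_size // 4) ** 2   # feature size after two 2x2 poolings
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(flat, 64), nn.ReLU(),
            nn.Linear(64, 1),                # one logit: P(real) after a sigmoid
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Usage: score a batch of candidate stamps scaled to roughly [0, 1].
model = RealBogusCNN()
stamps = torch.rand(8, 1, 21, 21)            # 8 hypothetical candidates
p_real = torch.sigmoid(model(stamps))        # probabilities in (0, 1)

A threshold on p_real then separates candidates kept for follow-up from discarded bogus detections; quoted figures such as the 94% correct classification and 4% false positive rate correspond to one such operating point.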




Read also

Current synoptic sky surveys monitor large areas of the sky to find variable and transient astronomical sources. As the number of detections per night at a single telescope easily exceeds several thousand, current detection pipelines make intensive use of machine learning algorithms to classify the detected objects and to filter out the most interesting candidates. A number of upcoming surveys will produce up to three orders of magnitude more data, which renders high-precision classification systems essential to reduce the manual and, hence, expensive vetting by human experts. We present an approach based on convolutional neural networks to discriminate between true astrophysical sources and artefacts in reference-subtracted optical images. We show that relatively simple networks are already competitive with state-of-the-art systems and that their quality can further be improved via slightly deeper networks and additional preprocessing steps -- eventually yielding models outperforming state-of-the-art systems. In particular, our best model correctly classifies about 97.3% of all real and 99.7% of all bogus instances on a test set containing 1,942 bogus and 227 real instances in total. Furthermore, the networks considered in this work can also successfully classify these objects at hand without relying on difference images, which might pave the way for future detection pipelines not containing image subtraction steps at all.
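To make the quoted figures concrete, the per-class accuracies (fraction of real and of bogus instances classified correctly) can be computed as below. The labels and predictions here are synthetic placeholders, not the paper's test set; only the 227/1,942 class sizes are taken from the abstract.

# Per-class accuracy of a real/bogus classifier (label 1 = real, 0 = bogus).
import numpy as np

def per_class_accuracy(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc_real = (y_pred[y_true == 1] == 1).mean()    # recall on real sources
    acc_bogus = (y_pred[y_true == 0] == 0).mean()   # recall on artefacts
    return acc_real, acc_bogus

# Synthetic example over 227 real and 1942 bogus instances.
rng = np.random.default_rng(0)
y_true = np.concatenate([np.ones(227, int), np.zeros(1942, int)])
y_pred = y_true.copy()
y_pred[rng.choice(227, 6, replace=False)] = 0          # a few missed reals
y_pred[227 + rng.choice(1942, 6, replace=False)] = 1   # a few false alarms
print(per_class_accuracy(y_true, y_pred))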
Large-scale sky surveys have played a transformative role in our understanding of astrophysical transients, only made possible by increasingly powerful machine learning-based filtering to accurately sift through the vast quantities of incoming data generated. In this paper, we present a new real-bogus classifier based on a Bayesian convolutional neural network that provides nuanced, uncertainty-aware classification of transient candidates in difference imaging, and demonstrate its application to the datastream from the GOTO wide-field optical survey. Not only are candidates assigned a well-calibrated probability of being real, but also an associated confidence that can be used to prioritise human vetting efforts and inform future model optimisation via active learning. To fully realise the potential of this architecture, we present a fully-automated training set generation method which requires no human labelling, incorporating a novel data-driven augmentation method to significantly improve the recovery of faint and nuclear transient sources. We achieve competitive classification accuracy (FPR and FNR both below 1%) compared against classifiers trained with fully human-labelled datasets, whilst being significantly quicker and less labour-intensive to build. This data-driven approach is uniquely scalable to the upcoming challenges and data needs of next-generation transient surveys. We make our data generation and model training codes available to the community.
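The abstract does not spell out the Bayesian machinery; a common approximation for uncertainty-aware CNN scoring is Monte Carlo dropout, sketched below under that assumption. The toy model, dropout rate, and sample count are all illustrative, not the paper's architecture.

# Uncertainty-aware scoring via Monte Carlo dropout: keep dropout active
# at inference and treat the spread of repeated forward passes as a
# confidence estimate. A common Bayesian-CNN approximation, assumed here.
import torch
import torch.nn as nn

class DropoutCNN(nn.Module):
    # Toy stand-in for the paper's Bayesian real/bogus model.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
        )
    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_score(model, x, n_samples=30):
    model.train()                       # keep dropout stochastic at inference
    probs = torch.stack([torch.sigmoid(model(x)) for _ in range(n_samples)])
    return probs.mean(0), probs.std(0)  # mean P(real) and its spread

model = DropoutCNN()
stamps = torch.rand(4, 1, 21, 21)
p_mean, p_std = mc_dropout_score(model, stamps)
# A high p_std flags low-confidence candidates for human vetting or for
# the active-learning loop mentioned above.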
Yang Xu, Liping Xin, Jing Wang 2020
The ground-based wide-angle camera array (GWAC) generates millions of single-frame alerts per night. After complicated and elaborate filtering by multiple methods, a couple of dozen candidates still need to be confirmed by follow-up observations in real time. In order to free scientists from the complex and high-intensity follow-up tasks, we developed a Real-time Automatic transient Validation System (RAVS), and introduce here its system architecture, data processing flow, database schema, automatic follow-up control flow, and mobile message notification solution. This system is capable of automatically carrying out all operations in real time without human intervention, including the validation of transient candidates, adaptive light-curve sampling of identified targets in multiple bands, and the pushing of observation results to the mobile client. The running of RAVS shows that an M-type stellar flare event can be well sampled without significant loss of detail, while the observing time is less than one-third of the time coverage. Because the control logic of RAVS is designed to be independent of the telescope hardware, RAVS can be conveniently ported to other telescopes, in particular the follow-up system of SVOM. Some future improvements to the adaptive light-curve sampling are presented, taking into account both the brightness of sources and the evolution trends of the corresponding light curves.
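The abstract does not give RAVS's actual sampling rule; the sketch below shows one plausible trend-driven cadence in that spirit, where a fast magnitude change shortens the gap between exposures. The thresholds and time steps are illustrative assumptions, not RAVS parameters.

# One possible adaptive-cadence rule: sample densely while the light curve
# evolves quickly (e.g. a flare rise or decay), back off on a plateau.
def next_sampling_gap(mags, base_gap=60.0, min_gap=15.0, max_gap=600.0):
    # mags: recent magnitudes, newest last; returns seconds until next exposure.
    if len(mags) < 2:
        return base_gap
    rate = abs(mags[-1] - mags[-2])     # magnitude change per recent step
    if rate > 0.2:                      # rapid evolution: sample densely
        return min_gap
    if rate < 0.02:                     # plateau: widen the gap, save time
        return min(max_gap, base_gap * 4)
    return base_gap

# Example: a flare decaying from 13.0 to 13.5 mag triggers dense sampling.
print(next_sampling_gap([13.0, 13.5]))  # -> 15.0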
Galaxy clusters appear as extended sources in XMM-Newton images, but not all extended sources are clusters. So, their proper classification requires visual inspection with optical images, which is a slow process with biases that are almost impossible to model. We tackle this problem with a novel approach, using convolutional neural networks (CNNs), a state-of-the-art image classification tool, for automatic classification of galaxy cluster candidates. We train the networks on combined XMM-Newton X-ray observations with their optical counterparts from the all-sky Digitized Sky Survey. Our data set originates from the X-CLASS survey sample of galaxy cluster candidates, selected by a specially developed pipeline, the XAmin, tailored for extended source detection and characterisation. Our data set contains 1707 galaxy cluster candidates classified by experts. Additionally, we created an official Zooniverse citizen science project, The Hunt for Galaxy Clusters, to probe whether citizen volunteers could help in the challenging task of galaxy cluster visual confirmation. The project contained 1600 galaxy cluster candidates in total, of which 404 overlap with the experts' sample. The networks were trained on expert and Zooniverse data separately. The CNN test sample contains 85 spectroscopically confirmed clusters and 85 non-clusters that appear in both data sets. Our custom network achieved the best performance in the binary classification of clusters and non-clusters, reaching an accuracy of 90%, averaged over 10 runs. The results of using CNNs on combined X-ray and optical data for galaxy cluster candidate classification are encouraging, and there is a lot of potential for future usage and improvements.
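One natural way to feed combined X-ray and optical data to a CNN is to stack the co-aligned cutouts as input channels. The sketch below assumes that scheme; the paper's actual preprocessing may differ, and the cutout size and min-max normalisation are illustrative.

# Stack an X-ray map and its optical counterpart as two CNN input channels.
import numpy as np
import torch

def make_input(xray: np.ndarray, optical: np.ndarray) -> torch.Tensor:
    # Returns a (2, H, W) tensor; each channel is scaled to [0, 1]
    # independently so neither band dominates the other.
    def norm(img):
        lo, hi = np.nanmin(img), np.nanmax(img)
        return (img - lo) / (hi - lo + 1e-9)
    return torch.from_numpy(
        np.stack([norm(xray), norm(optical)]).astype(np.float32)
    )

x = make_input(np.random.rand(64, 64), np.random.rand(64, 64))
print(x.shape)  # torch.Size([2, 64, 64]) -- feed into a Conv2d(2, ...) stem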
R.D. Parsons, S. Ohm 2019
In this work, we present a new, high performance algorithm for background rejection in imaging atmospheric Cherenkov telescopes. We build on the already popular machine-learning techniques used in gamma-ray astronomy by the application of the latest techniques in machine learning, namely recurrent and convolutional neural networks, to the background rejection problem. Use of these machine-learning techniques addresses some of the key challenges encountered in the currently implemented algorithms and helps to significantly increase the background rejection performance at all energies. We apply these machine learning techniques to the H.E.S.S. telescope array, first testing their performance on simulated data and then applying the analysis to two well known gamma-ray sources. With real observational data we find significantly improved performance over the current standard methods, with a 20-25% reduction in the background rate when applying the recurrent neural network analysis. Importantly, we also find that the convolutional neural network results are strongly dependent on the sky brightness in the source region which has important implications for the future implementation of this method in Cherenkov telescope analysis.
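A minimal sketch of the recurrent scheme the abstract describes: a shared per-camera CNN encodes each telescope image, and a GRU combines the features across the array. The 4-telescope setup, image size, and layer widths are illustrative, not the paper's exact model.

# Per-camera CNN encoder + GRU over the telescopes of an array,
# producing one gamma/background score per event.
import torch
import torch.nn as nn

class ArrayRNNClassifier(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.cnn = nn.Sequential(              # shared camera-image encoder
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 16, feat), nn.ReLU(),
        )
        self.rnn = nn.GRU(feat, feat, batch_first=True)
        self.head = nn.Linear(feat, 1)         # gamma-vs-background logit

    def forward(self, x):                      # x: (batch, n_tel, 1, H, W)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        _, h = self.rnn(f)                     # summary over the telescopes
        return self.head(h[-1])

model = ArrayRNNClassifier()
events = torch.rand(2, 4, 1, 40, 40)           # 2 events, 4 camera images each
print(torch.sigmoid(model(events)).shape)      # (2, 1) gamma probabilities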