
Multiwavelength classification of X-ray selected galaxy cluster candidates using convolutional neural networks

Posted by Matej Kosiba
Publication date: 2020
Research field: Physics
Paper language: English





Galaxy clusters appear as extended sources in XMM-Newton images, but not all extended sources are clusters. Their proper classification therefore requires visual inspection with optical images, which is a slow process with biases that are almost impossible to model. We tackle this problem with a novel approach, using convolutional neural networks (CNNs), a state-of-the-art image classification tool, for the automatic classification of galaxy cluster candidates. We train the networks on XMM-Newton X-ray observations combined with their optical counterparts from the all-sky Digitized Sky Survey. Our data set originates from the X-CLASS survey sample of galaxy cluster candidates, selected by a specially developed pipeline, XAmin, tailored to extended-source detection and characterisation. The data set contains 1707 galaxy cluster candidates classified by experts. Additionally, we created an official Zooniverse citizen science project, The Hunt for Galaxy Clusters, to probe whether citizen volunteers could help with the challenging task of visual confirmation of galaxy clusters. The project contained 1600 galaxy cluster candidates in total, of which 404 overlap with the expert sample. The networks were trained on the expert and Zooniverse data separately. The CNN test sample contains 85 spectroscopically confirmed clusters and 85 non-clusters that appear in both data sets. Our custom network achieved the best performance in the binary classification of clusters and non-clusters, reaching an accuracy of 90%, averaged over 10 runs. The results of using CNNs on combined X-ray and optical data for galaxy cluster candidate classification are encouraging, and there is considerable potential for future use and improvement.
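
As a rough illustration of this kind of classifier, the sketch below (Python, tf.keras) builds a small binary CNN that takes each candidate cutout as a two-channel image, one X-ray channel and one optical channel, and outputs a cluster probability. The 64x64 cutout size, layer widths, and training call are illustrative assumptions and do not reproduce the custom architecture described in the paper.

import tensorflow as tf
from tensorflow.keras import layers

def build_classifier(cutout_size=64):
    """Small binary CNN for two-channel (X-ray + optical) cutouts; all sizes are illustrative."""
    return tf.keras.Sequential([
        layers.Input(shape=(cutout_size, cutout_size, 2)),  # channel 0: X-ray, channel 1: optical
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # P(cluster)
    ])

model = build_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=20)  # x_train: (N, 64, 64, 2) cutouts, y_train: 0/1 expert labels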




Read also

We present a simulation-based inference framework using a convolutional neural network to infer dynamical masses of galaxy clusters from their observed 3D projected phase-space distribution, which consists of the projected galaxy positions in the sky and their line-of-sight velocities. By formulating the mass estimation problem within this simulation-based inference framework, we are able to quantify the uncertainties on the inferred masses in a straightforward and robust way. We generate a realistic mock catalogue emulating the Sloan Digital Sky Survey (SDSS) Legacy spectroscopic observations (the main galaxy sample) for redshifts $z \lesssim 0.09$ and explicitly illustrate the challenges posed by interloper (non-member) galaxies for cluster mass estimation from actual observations. Our approach constitutes the first optimal machine learning-based exploitation of the information content of the full 3D projected phase-space distribution, including both the virialized and infall cluster regions, for the inference of dynamical cluster masses. We also present, for the first time, the application of a simulation-based inference machinery to obtain dynamical masses of around $800$ galaxy clusters found in the SDSS Legacy Survey, and show that the resulting mass estimates are consistent with mass measurements from the literature.
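
The paper's simulation-based inference machinery is not reproduced here; as a simplified stand-in for the idea, the sketch below trains a network on each cluster's binned projected phase-space distribution (projected radius versus line-of-sight velocity) to predict the mean and log-variance of a Gaussian over log-mass with a negative log-likelihood loss, which yields per-cluster uncertainties in a similar spirit. The 32x32 binning, layer sizes, and Gaussian assumption are all illustrative.

import tensorflow as tf
from tensorflow.keras import layers

def gaussian_nll(y_true, y_pred):
    """Negative log-likelihood of a Gaussian over log-mass; y_pred holds (mean, log variance)."""
    mean, log_var = y_pred[:, 0:1], y_pred[:, 1:2]
    return tf.reduce_mean(0.5 * (log_var + tf.square(y_true - mean) * tf.exp(-log_var)))

# Input: the projected phase-space distribution of each cluster, binned into a 2D histogram
# of projected radius versus line-of-sight velocity (the 32x32 binning is an assumption).
model = tf.keras.Sequential([
    layers.Input(shape=(32, 32, 1)),
    layers.Conv2D(16, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2),  # predicted mean and log variance of log10(mass)
])
model.compile(optimizer="adam", loss=gaussian_nll)
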
Accurately and rapidly classifying exoplanet candidates from transit surveys is a goal of growing importance as the data rates from space-based survey missions increase. This is especially true for NASA's TESS mission, which generates thousands of new candidates each month. Here we created the first deep learning model capable of classifying TESS planet candidates. We adapted the neural network model of Ansdell et al. (2018) to TESS data. We then trained and tested this updated model on 4 sectors of high-fidelity, pixel-level simulation data created using the Lilith simulator and processed using the full TESS SPOC pipeline. We find our model performs very well on our simulated data, with 97% average precision and 92% accuracy on planets in the 2-class model. This accuracy is boosted by a further ~4% if planets found at the wrong periods are included. We also performed 3- and 4-class classification of planets, blended & target eclipsing binaries, and non-astrophysical false positives, which have slightly lower average precision and planet accuracies but are useful for follow-up decisions. When applied to real TESS data, 61% of TCEs coincident with currently published TOIs are recovered as planets, 4% more are suggested to be EBs, and we propose a further 200 TCEs as planet candidates.
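
For reference, per-class average precision and overall accuracy of the kind quoted above can be computed as in the snippet below; the four classes follow the abstract, but the labels and per-class probabilities are made-up toy values, not TESS results.

import numpy as np
from sklearn.metrics import average_precision_score, accuracy_score

classes = ["planet", "target EB", "blended EB", "false positive"]
y_true = np.array([0, 0, 1, 2, 3, 0, 2, 3])          # toy integer labels
y_prob = np.array([                                   # toy per-class probabilities from a classifier
    [0.8, 0.1, 0.05, 0.05], [0.6, 0.2, 0.1, 0.1], [0.2, 0.6, 0.1, 0.1],
    [0.1, 0.2, 0.6, 0.1], [0.1, 0.1, 0.2, 0.6], [0.7, 0.1, 0.1, 0.1],
    [0.3, 0.1, 0.5, 0.1], [0.2, 0.1, 0.1, 0.6],
])

for k, name in enumerate(classes):
    ap = average_precision_score((y_true == k).astype(int), y_prob[:, k])
    print(f"average precision ({name}): {ap:.2f}")
print("accuracy:", accuracy_score(y_true, y_prob.argmax(axis=1)))
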
The observation of the transient sky through a multitude of astrophysical messengers has led to several scientific breakthroughs over the last two decades, thanks to the fast evolution of the observational techniques and strategies employed by astronomers. This now requires the ability to coordinate multi-wavelength and multi-messenger follow-up campaigns with instruments, both in space and on the ground, that are jointly capable of scanning a large fraction of the sky with a high imaging cadence and duty cycle. In the optical domain, the key challenge for wide field of view telescopes covering tens to hundreds of square degrees is to deal with the detection, identification and classification of hundreds to thousands of optical transient (OT) candidates every night in a reasonable amount of time. In the last decade, new automated tools based on machine learning approaches have been developed to perform those tasks with a low computing time and a high classification efficiency. In this paper, we present an efficient classification method using Convolutional Neural Networks (CNN) to discard bogus sources falsely detected in astrophysical images in the optical domain. We designed this tool to improve the performance of the OT detection pipeline of the Ground Wide field Angle Cameras (GWAC) telescopes, a network of robotic telescopes aiming at monitoring the optical transient sky down to R=16 with a 15-second imaging cadence. We applied our trained CNN classifier to a sample of 1472 GWAC OT candidates detected by the real-time detection pipeline. It yields good classification performance, with 94% of events well classified and a false positive rate of 4%.
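
As a small illustration of how such a real/bogus classifier is scored, the snippet below thresholds hypothetical CNN scores and computes the fraction of well-classified events and the false positive rate; the scores, labels, and the 0.5 threshold are assumptions, not GWAC data.

import numpy as np

labels = np.array([1, 1, 0, 0, 1, 0, 0, 1])                    # 1 = real optical transient, 0 = bogus (toy labels)
scores = np.array([0.9, 0.8, 0.2, 0.6, 0.7, 0.1, 0.3, 0.95])   # hypothetical CNN P(real) outputs
pred = scores >= 0.5                                            # assumed decision threshold

well_classified = np.mean(pred == labels)
false_positive_rate = np.sum(pred & (labels == 0)) / np.sum(labels == 0)
print(f"well classified: {well_classified:.0%}, false positive rate: {false_positive_rate:.0%}")
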
Here we explore the efficiency and fidelity of a purely astrometric selection of quasars as point sources with zero proper motions in the Gaia data release 2 (DR2). We have built a complete candidate sample including 104 Gaia-DR2 point sources brighter than $G<20$ mag within one degree of the north Galactic pole (NGP), all with proper motions consistent with zero within the $2\sigma$ uncertainty. In addition to pre-existing spectra, we have secured long-slit spectroscopy of all the remaining candidates and find that all 104 stationary point sources in the field can be classified as either quasars (63) or stars (41). The selection efficiency of the zero-proper-motion criterion at high Galactic latitudes is thus $\approx 60\%$. Based on this complete quasar sample we examine the basic properties of the underlying quasar population within the imposed limiting magnitude. We find that the surface density of quasars is 20 deg$^{-2}$, the redshift distribution peaks at $z \sim 1.5$, and that only eight systems ($13^{+5}_{-3}\%$) show significant dust reddening. We then explore the selection efficiency of commonly used optical, near- and mid-infrared quasar identification techniques and find that they are all complete at the $85$-$90\%$ level compared to the astrometric selection. Finally, we discuss how the astrometric selection can be improved to an efficiency of $\approx 70\%$ by including an additional cut requiring the parallaxes of the candidates to be consistent with zero within $2\sigma$. The selection efficiency will further increase with the release of future, more sensitive astrometric measurements from the Gaia mission. This type of selection, purely based on the astrometry of the quasar candidates, is unbiased in terms of the colours and emission mechanisms of the quasars and thus provides the most complete census of the quasar population within the limiting magnitude of Gaia.
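
The zero-proper-motion criterion can be sketched as a simple catalogue cut. The snippet below applies a per-component 2-sigma cut on Gaia-DR2-style columns (pmra, pmdec and their errors), which is one plausible reading of the criterion rather than the authors' exact implementation, and the small table is made up for illustration.

import numpy as np
import pandas as pd

# Toy Gaia-DR2-style table; column names follow the Gaia archive, values are made up.
cat = pd.DataFrame({
    "source_id": [1, 2, 3],
    "phot_g_mean_mag": [18.2, 19.5, 20.3],
    "pmra": [0.3, 5.1, 0.1],    "pmra_error": [0.4, 0.5, 0.3],   # proper motions in mas/yr
    "pmdec": [-0.2, -3.0, 0.2], "pmdec_error": [0.3, 0.4, 0.3],
})

stationary = (
    (cat["phot_g_mean_mag"] < 20)                          # G < 20 mag
    & (np.abs(cat["pmra"]) < 2 * cat["pmra_error"])        # pmra consistent with zero at 2 sigma
    & (np.abs(cat["pmdec"]) < 2 * cat["pmdec_error"])      # pmdec consistent with zero at 2 sigma
)
print(cat.loc[stationary, "source_id"].tolist())           # quasar candidates for spectroscopic follow-up

A further cut requiring the parallax to be consistent with zero within 2 sigma, as discussed in the abstract, would simply add one more boolean condition to the selection.
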
Recently, deep convolutional neural networks have shown good results for image recognition. In this paper, we use convolutional neural networks with a finder module, which discovers the important region for recognition and extracts that region. We propose applying our method to the recognition of protein crystals for X-ray structural analysis. In this analysis, it is necessary to recognize states of protein crystallization from a large number of images. There are several methods that realize protein crystallization recognition using convolutional neural networks. In each method, large-scale data sets are required for high-accuracy recognition. In our data set, the number of images is not large enough for training a CNN; the amount of data available for CNNs is a serious issue in various fields. Our method achieves high-accuracy recognition with few images by discovering the region where the crystallization drop exists. We compared our crystallization image recognition method with a high-precision method using Inception-V3 and demonstrate through several experiments that our method is effective for crystallization images. Our method achieved an AUC value about 5% higher than that of the compared method.
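
The finder-module network itself is not reproduced here; as a hedged sketch of the kind of Inception-V3 baseline referred to in the comparison, the snippet below fine-tunes a pretrained Inception-V3 backbone with a small new classification head, assuming a simplified binary crystal/no-crystal labelling.

import tensorflow as tf
from tensorflow.keras import layers

# Pretrained Inception-V3 backbone with a new head; input size and binary labelling are assumptions.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet",
                                         input_shape=(299, 299, 3), pooling="avg")
base.trainable = False  # first train only the new head on the crystallization images

model = tf.keras.Sequential([
    base,
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # crystal present vs. not (simplified binary labelling)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auc")])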