
Machine Learning Based Real Bogus System for HSC-SSP Moving Object Detecting Pipeline

Published by: Hsing-Wen Lin
Publication date: 2017
Research field: Physics
Paper language: English





Machine learning techniques are widely applied in many modern optical sky surveys, e.g. Pan-STARRS1, PTF/iPTF and the Subaru/Hyper Suprime-Cam survey, to reduce human intervention in data verification. In this study, we have established a machine learning based real-bogus system to reject false detections in the Subaru/Hyper Suprime-Cam Strategic Survey Program (HSC-SSP) source catalog, so that the HSC-SSP moving object detection pipeline can operate more effectively thanks to the reduction of false positives. To train the real-bogus system, we use the stationary sources as the real training set and the flagged data as the bogus set. The training set contains 47 features, most of which are photometric measurements and shape moments generated by the HSC image reduction pipeline (hscPipe). Our system can reach a true positive rate (tpr) of ~96% at a false positive rate (fpr) of ~1%, or tpr ~99% at fpr ~5%. We therefore conclude that stationary sources are decent real training samples, and that photometric measurements and shape moments can reject false positives effectively.
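As a rough illustration of the approach the abstract describes (a supervised classifier trained on per-detection features, judged by its tpr/fpr trade-off), here is a minimal sketch using a random forest on synthetic data. The feature values, class separation, and sample sizes are invented stand-ins, not the actual hscPipe catalog columns or the paper's model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n, n_features = 2000, 47  # 47 features per detection, as in the abstract

# Synthetic stand-ins: "real" (stationary-source) samples drawn from a
# slightly shifted distribution relative to "bogus" (flagged) samples.
X_real = rng.normal(0.5, 1.0, size=(n, n_features))
X_bogus = rng.normal(0.0, 1.0, size=(n, n_features))
X = np.vstack([X_real, X_bogus])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Read the tpr at a chosen fpr operating point off the ROC curve.
scores = clf.predict_proba(X)[:, 1]
fpr, tpr, _ = roc_curve(y, scores)
idx = np.searchsorted(fpr, 0.01)
print(f"tpr at fpr ~ 1%: {tpr[idx]:.2f}")
```

In practice one would evaluate on a held-out set rather than the training data; the sketch only shows how a tpr-at-fixed-fpr operating point is extracted.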



Read also

The Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) is currently the deepest wide-field survey in progress. The 8.2 m aperture of the Subaru telescope is very powerful in detecting faint/small moving objects, including near-Earth objects, asteroids, centaurs and Trans-Neptunian objects (TNOs). However, the cadence and dithering pattern of the HSC-SSP are not designed for detecting moving objects, making it difficult to do so systematically. In this paper, we introduce a new pipeline for detecting moving objects (specifically TNOs) in a non-dedicated survey. The HSC-SSP catalogs are re-arranged into the HEALPix architecture. Then, the stationary detections and false positives are removed with a machine learning algorithm to produce a list of moving object candidates. An orbit linking algorithm and visual inspections are executed to generate the final list of detected TNOs. The preliminary results of a search for TNOs using this new pipeline on data from the first HSC-SSP data release (Mar 2014 to Nov 2015) are also presented.
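The stationary-detection removal step described above can be caricatured as a positional cross-match between epochs: a detection that reappears at (nearly) the same sky position in another epoch is stationary, and only unmatched detections survive as moving-object candidates. The coordinates and the 1-arcsecond tolerance below are illustrative, not the pipeline's actual parameters.

```python
import numpy as np

def remove_stationary(epoch_a, epoch_b, tol_deg=1.0 / 3600):
    """Keep epoch_a detections (ra, dec in degrees) with no epoch_b
    counterpart within tol_deg (small-angle separation)."""
    moving = []
    for ra, dec in epoch_a:
        sep = np.hypot((epoch_b[:, 0] - ra) * np.cos(np.radians(dec)),
                       epoch_b[:, 1] - dec)
        if sep.min() > tol_deg:
            moving.append((ra, dec))
    return np.array(moving)

epoch_a = np.array([[150.0000, 2.0000],   # reappears in epoch_b: stationary
                    [150.0100, 2.0050]])  # moved between epochs
epoch_b = np.array([[150.0000, 2.0000],
                    [150.0130, 2.0070]])
print(remove_stationary(epoch_a, epoch_b))  # only the moving candidate
```

A real implementation would use a spatial index (e.g. the HEALPix binning the abstract mentions) rather than this all-pairs loop.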
Astronomers require efficient automated detection and classification pipelines when conducting large-scale surveys of the (optical) sky for variable and transient sources. Such pipelines are fundamentally important, as they permit rapid follow-up and analysis of those detections most likely to be of scientific value. We therefore present a deep learning pipeline based on the convolutional neural network architecture called MeerCRAB. It is designed to filter out the so-called bogus detections from true astrophysical sources in the transient detection pipeline of the MeerLICHT telescope. Optical candidates are described using a variety of 2D images and numerical features extracted from those images. The relationship between the input images and the target classes is unclear, since the ground truth is poorly defined and often the subject of debate. This makes it difficult to determine which source of information should be used to train a classification algorithm. We therefore used two methods for labelling our data: (i) thresholding and (ii) latent class model approaches. We deployed variants of MeerCRAB that employed different network architectures trained using different combinations of input images and training set choices, based on classification labels provided by volunteers. The deepest network worked best, with an accuracy of 99.5% and a Matthews correlation coefficient (MCC) value of 0.989. The best model was integrated into the MeerLICHT transient vetting pipeline, enabling the accurate and efficient classification of detected transients and allowing researchers to select the most promising candidates for their research goals.
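The Matthews correlation coefficient quoted in the abstract summarizes a binary confusion matrix in a single number in [-1, 1] and, unlike accuracy, stays informative on imbalanced classes. A small self-contained illustration (the confusion-matrix counts are invented):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# e.g. a near-perfect classifier on a balanced 1000-sample test set:
print(round(mcc(tp=495, tn=500, fp=0, fn=5), 3))
```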
Efficient automated detection of flux-transient, reoccurring flux-variable, and moving objects is increasingly important for large-scale astronomical surveys. We present braai, a convolutional-neural-network, deep-learning real/bogus classifier designed to separate genuine astrophysical events and objects from false positive, or bogus, detections in the data of the Zwicky Transient Facility (ZTF), a new robotic time-domain survey currently in operation at the Palomar Observatory in California, USA. Braai demonstrates state-of-the-art performance as quantified by its low false negative and false positive rates. We describe the open-source software tools used internally at Caltech to archive and access ZTF's alerts and light curves (Kowalski), and to label the data (Zwickyverse). We also report the initial results of the classifier deployment on the Edge Tensor Processing Units (TPUs) that show comparable performance in terms of accuracy, but in a much more (cost-)efficient manner, which has significant implications for current and future surveys.
We present the procedure to build and validate the bright-star masks for the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP) survey. To identify and mask the saturated stars in the full HSC-SSP footprint, we rely on the Gaia and Tycho-2 star catalogues. We first assemble a pure star catalogue down to $G_{\rm Gaia} < 18$ after removing $\sim$1.5% of sources that appear extended in the Sloan Digital Sky Survey (SDSS). We perform visual inspection on the early data from the S16A internal release of HSC-SSP, finding that our star catalogue is 99.2% pure down to $G_{\rm Gaia} < 18$. Second, we build the mask regions in an automated way using stacked detected source measurements around bright stars binned per $G_{\rm Gaia}$ magnitude. Finally, we validate those masks through visual inspection and comparison with the literature on galaxy number counts and angular two-point correlation functions. This version (Arcturus) supersedes the previous version (Sirius) used in the S16A internal and DR1 public releases. We publicly release the full masks and tools to flag objects in the entire footprint of the planned HSC-SSP observations at this address: ftp://obsftp.unige.ch/pub/coupon/brightStarMasks/HSC-SSP/.
The advancement of technology has resulted in a rapid increase in supernova (SN) discoveries. The Subaru/Hyper Suprime-Cam (HSC) transient survey, conducted from fall 2016 through spring 2017, yielded 1824 SN candidates. This gave rise to the need for fast type classification for spectroscopic follow-up and prompted us to develop a machine learning algorithm using a deep neural network (DNN) with highway layers. This machine is trained by actual observed cadence and filter combinations such that we can directly input the observed data array into the machine without any interpretation. We tested our model with a dataset from the LSST classification challenge (Deep Drilling Field). Our classifier scores an area under the curve (AUC) of 0.996 for binary classification (SN Ia or non-SN Ia) and 95.3% accuracy for three-class classification (SN Ia, SN Ibc, or SN II). Application of our binary classification to HSC transient data yields an AUC score of 0.925. With two weeks of HSC data since the first detection, this classifier achieves 78.1% accuracy for binary classification, and the accuracy increases to 84.2% with the full dataset. This paper discusses the potential use of machine learning for SN type classification purposes.
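The AUC scores quoted across these abstracts have a simple probabilistic reading: the AUC equals the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one (the normalized Mann-Whitney U statistic), so it can be computed directly from classifier scores without tracing an ROC curve. A small sketch with made-up scores:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half a win."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)

# One misranked pair out of nine gives AUC = 8/9:
print(auc([0.9, 0.8, 0.35], [0.7, 0.3, 0.1]))
```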