
A Morphological Classification Model to Identify Unresolved PanSTARRS1 Sources: Application in the ZTF Real-Time Pipeline

Posted by: Adam Miller
Publication date: 2019
Research field: Physics
Language: English
Author: Yutaro Tachibana





In the era of large photometric surveys, the importance of automated and accurate classification is rapidly increasing. Specifically, the separation of resolved and unresolved sources in astronomical imaging is a critical initial step for a wide array of studies, ranging from Galactic science to large-scale structure and cosmology. Here, we present our method to construct a large, deep catalog of point sources utilizing Pan-STARRS1 (PS1) 3$\pi$ survey data, which consists of $\sim 3\times10^9$ sources with $m \lesssim 23.5\,$mag. We develop a supervised machine-learning methodology, using the random forest (RF) algorithm, to construct the PS1 morphology model. We train the model using $\sim 5\times10^4$ PS1 sources with HST COSMOS morphological classifications and assess its performance using $\sim 4\times10^6$ sources with Sloan Digital Sky Survey (SDSS) spectra and $\sim 2\times10^8$ \textit{Gaia} sources. We construct 11 white flux features, which combine PS1 flux and shape measurements across 5 filters, to increase the signal-to-noise ratio relative to any individual filter. The RF model is compared to 3 alternative models, including the SDSS and PS1 photometric classification models, and we find that the RF model performs best. By number, the PS1 catalog is dominated by faint sources ($m \gtrsim 21\,$mag), and in this regime the RF model significantly outperforms the SDSS and PS1 models. For time-domain surveys, identifying unresolved sources is crucial for inferring the Galactic or extragalactic origin of new transients. We have classified $\sim 1.5\times10^9$ sources using the RF model, and these results are used within the Zwicky Transient Facility real-time pipeline to automatically reject stellar sources from the extragalactic alert stream.
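A minimal sketch of the kind of supervised star/galaxy separation described above, using scikit-learn's RandomForestClassifier on synthetic data. The feature count (11) mirrors the white flux features, but the features, labels, and hyperparameters here are illustrative stand-ins, not the paper's trained PS1 model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Toy stand-in for PS1 sources: each row is one source, each column one
# "white" feature combining flux/shape measurements across filters
# (hypothetical values). Labels: 1 = point source, 0 = resolved.
n_sources = 5000
X = rng.normal(size=(n_sources, 11))  # 11 white-flux-style features
y = (X[:, 0] + 0.5 * X[:, 1]
     + rng.normal(scale=0.5, size=n_sources) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Random forest: an ensemble of decision trees, each trained on a
# bootstrap sample with a random feature subset tried at each split.
clf = RandomForestClassifier(n_estimators=400, min_samples_leaf=2, n_jobs=-1)
clf.fit(X_train, y_train)

# Score new sources: P(point source), usable downstream to veto
# likely-stellar candidates from an extragalactic alert stream.
p_star = clf.predict_proba(X_test)[:, 1]
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
print(f"mean P(point source): {p_star.mean():.2f}")
```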


Read also

A. A. Miller (2020)
We present an update to the Pan-STARRS1 Point Source Catalog (PS1 PSC), which provides morphological classifications of PS1 sources. The original PS1 PSC adopted stringent detection criteria that excluded hundreds of millions of PS1 sources from the PSC. Here, we adapt the supervised machine learning methods used to create the PS1 PSC and apply them to different photometric measurements that are more widely available, allowing us to add $\sim$144 million new classifications while expanding the total number of sources in the PS1 PSC by $\sim$10%. We find that the new methodology, which utilizes PS1 forced photometry, performs $\sim$6-8% worse than the original method. This slight degradation in performance is offset by the overall increase in the size of the catalog. The PS1 PSC is used by time-domain surveys to filter transient alert streams by removing candidates coincident with point sources that are likely to be Galactic in origin. The addition of $\sim$144 million new classifications to the PS1 PSC will improve the efficiency with which transients are discovered.
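The alert-filtering use case can be illustrated with a small, hedged sketch: cross-match alert positions against a point-source catalog with astropy and veto alerts coincident with a confident point source. The catalog values, the score column, and the 1-arcsecond radius are illustrative assumptions, not the ZTF pipeline's actual cuts.

```python
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

# Toy catalogs: (ra, dec) in degrees plus a point-source score in [0, 1].
psc_coords = SkyCoord(ra=[150.10, 150.25] * u.deg, dec=[2.20, 2.35] * u.deg)
psc_score = np.array([0.9, 0.2])  # P(point source) per PSC entry

alerts = SkyCoord(ra=[150.1001, 150.4000] * u.deg,
                  dec=[2.2001, 2.5000] * u.deg)

# Nearest-neighbour match of each alert to the point-source catalog.
idx, sep2d, _ = alerts.match_to_catalog_sky(psc_coords)

# Flag alerts coincident (< 1") with a confident point source as
# likely stellar, hence likely Galactic in origin.
is_stellar = (sep2d < 1.0 * u.arcsec) & (psc_score[idx] > 0.5)
print(is_stellar)  # [ True False]
```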
Glitches are the observational manifestations of superfluidity inside neutron stars. The aim of this paper is to describe an automated glitch detection pipeline, which can alert observers to possible real-time detections of rotational glitches in pulsars. After an alert, the pulsars can be monitored at a higher cadence to measure the post-glitch recovery phase. Two algorithms, namely Median Absolute Deviation (MAD) and polynomial regression, have been explored to detect glitches in real time. The pipeline has been optimized with the help of simulated timing residuals for both algorithms. Based on the simulations, we conclude that the polynomial regression algorithm is significantly more effective for real-time glitch detection. The pipeline has been tested on a few published glitches. This pipeline is presently implemented at the Ooty Radio Telescope. In the era of upcoming large telescopes like the SKA, several hundred pulsars will be observed regularly, and such a tool will be useful both for real-time detection and for optimal utilization of observing time for such glitching pulsars.
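As a rough illustration of the first algorithm, here is a minimal MAD-based outlier flagger for timing residuals; the threshold, noise scale, and simulated glitch step are illustrative assumptions rather than the paper's tuned pipeline parameters.

```python
import numpy as np

def mad_flags(residuals, threshold=5.0):
    """Flag residuals more than `threshold` robust sigmas from the median.

    MAD is scaled by 1.4826 so it estimates the standard deviation for
    Gaussian noise; a glitch shows up as a run of flagged epochs.
    """
    med = np.median(residuals)
    mad = 1.4826 * np.median(np.abs(residuals - med))
    return np.abs(residuals - med) > threshold * mad

# Simulated residuals: white noise plus a step after a glitch at epoch 80.
rng = np.random.default_rng(1)
res = rng.normal(scale=1e-6, size=120)
res[80:] += 2e-5  # post-glitch jump in the timing residuals
print(np.nonzero(mad_flags(res))[0][:5])  # first flagged epochs: [80 81 ...]
```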
With growing data volumes from synoptic surveys, astronomers must become more abstracted from the discovery and introspection processes. Given the scarcity of follow-up resources, there is a particularly sharp onus on the frameworks that replace these human roles to provide accurate and well-calibrated probabilistic classification catalogs. Such catalogs inform the subsequent follow-up, allowing consumers to optimize the selection of specific sources for further study and permitting rigorous treatment of purities and efficiencies for population studies. Here, we describe a process to produce a probabilistic classification catalog of variability with machine learning from a multi-epoch photometric survey. In addition to producing accurate classifications, we show how to estimate calibrated class probabilities, and motivate the importance of probability calibration. We also introduce a methodology for feature-based anomaly detection, which allows discovery of objects in the survey that do not fit within the predefined class taxonomy. Finally, we apply these methods to sources observed by the All Sky Automated Survey (ASAS), and unveil the Machine-learned ASAS Classification Catalog (MACC), which is a 28-class probabilistic classification catalog of 50,124 ASAS sources. We estimate that MACC achieves a sub-20% classification error rate, and demonstrate that the class posterior probabilities are reasonably calibrated. MACC classifications compare favorably to the classifications of several previous domain-specific ASAS papers and to the ASAS Catalog of Variable Stars, which had classified only 24% of those sources into one of 12 science classes. The MACC is publicly available at http://www.bigmacc.info.
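The calibration step can be sketched with scikit-learn's CalibratedClassifierCV, which wraps a base classifier and maps its raw scores to calibrated class probabilities. The synthetic data, base model, and isotonic method are illustrative; MACC's actual features, 28-class taxonomy, and calibration procedure are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split

# Synthetic multi-class data standing in for variable-star features.
X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                           n_classes=3, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

base = RandomForestClassifier(n_estimators=200, random_state=0)

# Isotonic calibration via cross-validation: learns a monotone map from
# raw forest votes to calibrated posterior probabilities.
cal = CalibratedClassifierCV(base, method="isotonic", cv=3)
cal.fit(X_tr, y_tr)

proba = cal.predict_proba(X_te)  # rows sum to 1; usable as class posteriors
print(proba[:2].round(3))
```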
The Zwicky Transient Facility (ZTF) has been observing the entire northern sky since the start of 2018 down to a magnitude of 20.5 ($5\sigma$ for a 30 s exposure) in $g$, $r$, and $i$ filters. Over the course of two years, ZTF has obtained light curves of more than a billion sources, each with 50-1000 epochs per light curve in $g$ and $r$, and fewer in $i$. To be able to use the information contained in the light curves of variable sources for new scientific discoveries, an efficient and flexible framework is needed to classify them. In this paper, we introduce the methods and infrastructure which will be used to classify all ZTF light curves. Our approach aims to be flexible and modular, allowing a dynamic classification scheme and labels, continuously evolving training sets, and the use of different machine-learning classifier types and architectures. With this setup, we are able to continuously update and improve the classification of ZTF light curves as new data become available, training samples are updated, and new classes need to be incorporated.
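One way to picture the modular design described above is a small registry-style taxonomy in which each node carries a swappable classifier, so labels and models can evolve independently of the traversal logic. This is a hypothetical sketch; the names and structure are not taken from the ZTF codebase.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Sequence

@dataclass
class Node:
    label: str
    predict: Callable[[Sequence[float]], float]  # features -> P(label)
    children: Dict[str, "Node"] = field(default_factory=dict)

def classify(node: Node, features: Sequence[float], threshold: float = 0.5):
    """Walk the taxonomy, descending into children only when the parent
    score passes the threshold; returns {label: score} for visited nodes."""
    out = {node.label: node.predict(features)}
    for child in node.children.values():
        if out[node.label] >= threshold:
            out.update(classify(child, features, threshold))
    return out

# Toy two-level scheme: variable -> periodic. New nodes can be registered
# (and retrained models swapped in) without touching the traversal logic.
root = Node("variable", predict=lambda f: 0.9)
root.children["periodic"] = Node("periodic", predict=lambda f: 0.7)
print(classify(root, [0.1, 0.2]))  # {'variable': 0.9, 'periodic': 0.7}
```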
B. Hasenberger & J. Alves (2019)
Reconstructing 3D distributions from their 2D projections is a ubiquitous problem in various scientific fields, particularly so in observational astronomy. In this work, we present a new approach to solving this problem: a Vienna inverse-Abel-transform based object reconstruction algorithm, AVIATOR. The reconstruction that it performs is based on the assumption that the distribution along the line of sight is similar to the distribution in the plane of projection, which requires a morphological analysis of the structures in the projected image. The output of the AVIATOR algorithm is an estimate of the 3D distribution in the form of a reconstruction volume that is calculated without the problematic requirements that commonly occur in other reconstruction methods, such as symmetry in the plane of projection or modelling of radial profiles. We demonstrate the robustness of the technique to different geometries, density profiles, and noise by applying the AVIATOR algorithm to several model objects. In addition, the algorithm is applied to real data: We reconstruct the density and temperature distributions of two dense molecular cloud cores and find that they are in excellent agreement with profiles reported in the literature. The AVIATOR algorithm is thus capable of reconstructing 3D distributions of physical quantities consistently using an intuitive set of assumptions.
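For context, the classical symmetric inversion that AVIATOR avoids having to assume can be sketched in a few lines: onion peeling, a discrete inverse-Abel-type method that assumes the projected object is built from concentric shells of constant density and solves the resulting triangular system. This is a textbook technique shown for contrast, not the AVIATOR algorithm itself.

```python
import numpy as np

def onion_peel(projection, dr=1.0):
    """Invert a projected axisymmetric profile P(y) to a radial profile f(r).

    Assumes f is constant within shells of width dr; the projection is then
    P = A @ f with A[i, j] the chord length through shell j at impact
    parameter y_i, an upper-triangular system solved directly.
    """
    n = len(projection)
    edges = np.arange(n + 1) * dr  # shell boundaries r_0 .. r_n
    y = edges[:-1]                 # impact parameters at inner edges
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2.0 * (np.sqrt(edges[j + 1]**2 - y[i]**2)
                             - np.sqrt(max(edges[j]**2 - y[i]**2, 0.0)))
    return np.linalg.solve(A, projection)

# Demo: a uniform-density sphere of radius 10; the recovered radial
# profile should be ~1 inside the sphere and ~0 outside.
R, dr = 10.0, 1.0
y = np.arange(20) * dr
proj = 2.0 * np.sqrt(np.clip(R**2 - y**2, 0.0, None))
print(onion_peel(proj, dr).round(2))
```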