
Optimizing spectroscopic follow-up strategies for supernova photometric classification with active learning

Added by Emille E. O. Ishida
Publication date: 2018
Field: Physics
Language: English





We report a framework for spectroscopic follow-up design aimed at optimizing supernova photometric classification. The strategy accounts for the unavoidable mismatch between spectroscopic and photometric samples and can be used even at the beginning of a new survey, without any initial training set. The framework falls under the umbrella of active learning (AL), a class of algorithms that aims to minimize labelling costs by identifying a few carefully chosen objects with high potential to improve the classifier's predictions. As a proof of concept, we use the simulated data released after the Supernova Photometric Classification Challenge (SNPCC) and a random forest classifier. Our results show that, using only 12% of the number of training objects in the SNPCC spectroscopic sample, this approach is able to double purity results. Moreover, to take into account multiple spectroscopic observations in the same night, we propose a semi-supervised batch-mode AL algorithm that selects the set of $N=5$ most informative objects each night. In comparison with the initial state using the traditional approach, our method achieves 2.3 times higher purity and a comparable figure of merit after only 180 days of observation, or 800 queries (73% of the SNPCC spectroscopic sample size). These results were obtained using the same amount of spectroscopic time necessary to observe the original SNPCC spectroscopic sample, showing that this type of strategy is feasible with currently available spectroscopic resources. The code used in this work is available in the COINtoolbox: https://github.com/COINtoolbox/ActSNClass .
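The batch-mode querying described above can be illustrated with plain uncertainty sampling: retrain the classifier each "night", then request spectra for the few pool objects the model is least sure about. This is a minimal sketch only, with synthetic placeholder features and a least-confident criterion; it is not the paper's semi-supervised algorithm, and all names and sizes here are illustrative assumptions.

```python
# Sketch of batch-mode active learning with uncertainty sampling.
# Data are synthetic placeholders, NOT the SNPCC sample.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "light-curve features": two slightly separated classes.
X_pool = np.vstack([rng.normal(0, 1, (500, 4)), rng.normal(1, 1, (500, 4))])
y_pool = np.array([0] * 500 + [1] * 500)

# Tiny initial training set -- the AL setting: almost no labels.
labelled = list(range(0, 1000, 100))            # 10 seed objects
unlabelled = [i for i in range(1000) if i not in labelled]

N_PER_NIGHT = 5                                 # spectra available per night
for night in range(20):
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_pool[labelled], y_pool[labelled])
    # Uncertainty sampling: query the N objects whose predicted class
    # probability is closest to 0.5 (least confident predictions).
    proba = clf.predict_proba(X_pool[unlabelled])[:, 1]
    order = np.argsort(np.abs(proba - 0.5))[:N_PER_NIGHT]
    queried = [unlabelled[i] for i in order]
    labelled.extend(queried)                    # "take the spectra"
    unlabelled = [i for i in unlabelled if i not in queried]

print(f"training set after 20 nights: {len(labelled)} objects")  # 110
```

After 20 nights the training set has grown from 10 to 110 objects, each one chosen where the classifier was most uncertain rather than at random.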



Related research

Automated photometric supernova classification has become an active area of research in recent years in light of current and upcoming imaging surveys such as the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope, given that spectroscopic confirmation of type for all supernovae discovered will be impossible. Here, we develop a multi-faceted classification pipeline, combining existing and new approaches. Our pipeline consists of two stages: extracting descriptive features from the light curves and classification using a machine learning algorithm. Our feature extraction methods vary from model-dependent techniques, namely SALT2 fits, to more independent techniques fitting parametric models to curves, to a completely model-independent wavelet approach. We cover a range of representative machine learning algorithms, including naive Bayes, k-nearest neighbors, support vector machines, artificial neural networks and boosted decision trees (BDTs). We test the pipeline on simulated multi-band DES light curves from the Supernova Photometric Classification Challenge. Using the commonly used area under the curve (AUC) of the Receiver Operating Characteristic as a metric, we find that the SALT2 fits and the wavelet approach, with the BDTs algorithm, each achieves an AUC of 0.98, where 1 represents perfect classification. We find that a representative training set is essential for good classification, whatever the feature set or algorithm, with implications for spectroscopic follow-up. Importantly, we find that by using either the SALT2 or the wavelet feature sets with a BDT algorithm, accurate classification is possible purely from light curve data, without the need for any redshift information.
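The second stage of a pipeline like this one, classifying extracted features with boosted trees and scoring with the AUC of the ROC curve, can be sketched in a few lines. The features below are random placeholders standing in for SALT2 or wavelet coefficients; the classifier and data are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch: boosted-tree classification of extracted features,
# scored with the area under the ROC curve (1.0 = perfect).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Placeholder "features" for two classes (0 = non-Ia, 1 = Ia).
X = np.vstack([rng.normal(0, 1, (300, 5)), rng.normal(1.5, 1, (300, 5))])
y = np.array([0] * 300 + [1] * 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
bdt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# AUC uses the predicted class-1 probabilities, not hard labels.
auc = roc_auc_score(y_te, bdt.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")
```

Note that AUC is threshold-independent: it measures how well the classifier ranks type Ia above non-Ia events, which is why it is a natural metric when the follow-up budget, not a fixed cut, decides how many candidates get spectra.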
Survey telescopes such as the Vera C. Rubin Observatory will increase the number of observed supernovae (SNe) by an order of magnitude, discovering millions of events; however, it is impossible to spectroscopically confirm the class of all the SNe discovered. Thus, photometric classification is crucial, but its accuracy depends on the not-yet-finalized observing strategy of the Rubin Observatory's Legacy Survey of Space and Time (LSST). We quantitatively analyze the impact of the LSST observing strategy on SNe classification using simulated multi-band light curves from the Photometric LSST Astronomical Time-Series Classification Challenge (PLAsTiCC). First, we augment the simulated training set to be representative of the photometric redshift distribution per supernova class, the cadence of observations, and the flux uncertainty distribution of the test set. Then we build a classifier using the photometric transient classification library snmachine, based on wavelet features obtained from Gaussian process fits, yielding performance similar to the winning PLAsTiCC entry. We study the classification performance for SNe with different properties within a single simulated observing strategy. We find that season length is an important factor, with light curves of 150 days yielding the highest classification performance. Cadence is also crucial for SNe classification; events with a median inter-night gap of <3.5 days yield higher performance. Interestingly, we find that large gaps (>10 days) in light curve observations do not impact classification performance as long as sufficient observations are available on either side, due to the effectiveness of the Gaussian process interpolation. This analysis is the first exploration of the impact of observing strategy on photometric supernova classification with LSST.
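The robustness to large gaps comes from the Gaussian process fit: given points on both sides of a gap, the GP mean bridges it with quantified uncertainty. The toy light curve below is a hedged sketch of that idea with a synthetic Gaussian-shaped flux bump and a generic RBF kernel, not the snmachine configuration.

```python
# Sketch: Gaussian-process interpolation of a light curve with a
# 15-day gap (days 27-45). Synthetic data, generic RBF kernel.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
# Observations every 3 days, with a gap between day 27 and day 45.
t_obs = np.concatenate([np.arange(0, 30, 3.0), np.arange(45, 75, 3.0)])
# Gaussian-shaped flux bump peaking at day 35 (inside the gap) + noise.
flux = np.exp(-0.5 * ((t_obs - 35) / 12) ** 2) + rng.normal(0, 0.02, t_obs.size)

kernel = RBF(length_scale=10.0) + WhiteKernel(noise_level=1e-3)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t_obs[:, None], flux)

# Predict across the gap; the GP also returns a per-point uncertainty.
t_grid = np.linspace(0, 75, 200)
mean, std = gp.predict(t_grid[:, None], return_std=True)
in_gap = (t_grid > 27) & (t_grid < 45)
print(f"peak interpolated flux in gap: {mean[in_gap].max():.2f}")
```

Because points constrain the fit on both sides, the interpolated mean tracks the underlying bump through the gap, which is exactly the behaviour the abstract credits for the insensitivity to >10-day gaps.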
We set out a simulation to explore the follow-up of exoplanet candidates. We compare photometric (transit method) and spectroscopic (Doppler shift method) techniques using three instruments: NGTS, HARPS, and CORALIE. We take into account the precision of follow-up and the required observing time in an attempt to rank each method for a given set of planetary system parameters. The methods are assessed on two criteria: the SNR of the detection and the follow-up time before characterisation. We find that different follow-up techniques are preferred in different regions of parameter space. For SNR we find that the ratio of spectroscopic to photometric SNR for a given system scales as $R_p/P^{\frac{1}{3}}$. For follow-up time we find that photometry is favoured for the shortest-period systems ($<10$ d) as well as systems with small planet radii. Spectroscopy is then preferred for systems with larger-radius, and thus more massive, planets (given our assumed mass-radius relationship). Finally, we attempt to account for the availability of telescopes and weight the two methods accordingly.
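The quoted scaling, with $R_p$ the planet radius and $P$ the orbital period, can be made concrete with two toy systems. The function and numbers below are illustrative assumptions only, in arbitrary relative units.

```python
# Toy illustration of the scaling: spectroscopic-to-photometric SNR
# ratio taken proportional to R_p / P**(1/3). Arbitrary relative units.
def snr_ratio(r_planet: float, period_days: float) -> float:
    """Relative spectroscopic vs photometric SNR (up to a constant)."""
    return r_planet / period_days ** (1.0 / 3.0)

# A hot Jupiter (large radius, short period) versus a small,
# long-period planet: spectroscopy gains most on the former.
hot_jupiter = snr_ratio(11.2, 3.0)   # Jupiter-sized (Earth radii), 3-day orbit
small_cool = snr_ratio(1.0, 30.0)    # Earth-sized, 30-day orbit
print(hot_jupiter, small_cool)
```

The weak $P^{1/3}$ dependence means radius dominates: the two toy planets differ in the ratio by more than an order of magnitude, mostly through $R_p$.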
B. W. Miller (2019)
Time domain and multi-messenger astrophysics are growing and important modes of observational astronomy that will help define astrophysics in the 2020s. Significant effort is being put into developing the components of a follow-up system for dynamically turning survey alerts into data. This system consists of: 1) brokers that will aggregate, classify, and filter alerts; 2) Target Observation Managers (TOMs) for prioritizing targets and managing observations and data; and 3) observatory interfaces, schedulers, and facilities along with data reduction software and science archives. These efforts need continued community support and funding in order to complete and maintain them. Many of the efforts can be community open-source software projects but they will benefit from the leadership of professional software developers. The coordination should be done by institutions that are involved in the follow-up system such as the national observatories (e.g. LSST/Gemini/NOAO Mid-scale/Community Science and Data Center) or a new MMA institute. These tools will help the community to produce the most science from new facilities and will provide new capabilities for all users of the facilities that adopt them.
The space missions TESS and PLATO are expected to double the roughly 4000 exoplanets already discovered and will measure the size of thousands of exoplanets around the brightest stars in the sky, allowing ground-based radial velocity spectroscopy follow-up to determine the orbit and mass of the detected planets. The new facility we are developing, MARVEL (Raskin et al., this conference), will enable ground-based follow-up of the large numbers of exoplanet detections expected from TESS and PLATO, which the current facilities achieving the necessary radial velocity accuracy of 1 m/s or better cannot carry out alone. This paper presents the MARVEL observation strategy and a performance analysis based on predicted PLATO transit detection yield simulations. The resulting baseline observation scenario will inform the instrument design choices and demonstrate the effectiveness of MARVEL as a TESS and PLATO science enabling facility.
