
Automatic Catalog of RRLyrae from $\sim$ 14 million VVV Light Curves: How far can we go with traditional machine-learning?

Added by Juan Cabral
Publication date: 2020
Field: Physics
Language: English





The creation of a 3D map of the bulge using RRLyrae (RRL) is one of the main goals of the VVV(X) surveys. The overwhelming number of sources under analysis requires the use of automatic procedures. In this context, previous works introduced the use of Machine Learning (ML) methods for variable star classification. Our goal is the development and analysis of an automatic procedure, based on ML, for the identification of RRLs in the VVV Survey. This procedure will be used to generate reliable catalogs integrated over several tiles in the survey. After the reconstruction of light curves, we extract a set of period- and intensity-based features. We use for the first time a new subset of pseudo-color features. We discuss all the appropriate steps needed to define our automatic pipeline: selection of quality measures; sampling procedures; classifier setup and model selection. As a final result, we construct an ensemble classifier with an average Recall of 0.48 and average Precision of 0.86 over 15 tiles. We also make available our processed datasets and a catalog of candidate RRLs. Perhaps most interestingly from a classification perspective based on photometric broad-band data, our results indicate that color is an informative feature type of the RRL that should be considered for automatic classification methods via ML. We also argue that Recall and Precision, in both tables and curves, are high-quality metrics for this highly imbalanced problem. Furthermore, we show for our VVV dataset that, to obtain good estimates, it is important to use the original class distribution rather than reduced samples with an artificial balance. Finally, we show that the use of ensemble classifiers helps resolve the crucial model selection step, and that most errors in the identification of RRLs are related to low-quality observations of some sources or to the difficulty of resolving the RRL-C type given the data.
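The abstract's main methodological points (an ensemble classifier, Precision/Recall as the metrics of choice, and evaluation on the original imbalanced class distribution) can be illustrated with a minimal sketch. This is not the authors' pipeline: the features, class balance, and base learners below are placeholder assumptions using scikit-learn, standing in for the real period, intensity, and pseudo-color features of VVV light curves.

```python
# Minimal sketch of an ensemble classifier evaluated with Precision/Recall
# on a highly imbalanced problem (assumed setup, not the paper's pipeline).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Placeholder for light-curve features: few RRL positives among many others.
X, y = make_classification(n_samples=5000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Soft-voting ensemble over heterogeneous base learners, which sidesteps
# committing to a single model during model selection.
ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
y_pred = ensemble.predict(X_te)

# Evaluate on the original (imbalanced) distribution, as the paper advocates,
# rather than on an artificially balanced subsample.
precision = precision_score(y_te, y_pred, zero_division=0)
recall = recall_score(y_te, y_pred, zero_division=0)
print(f"precision={precision:.2f} recall={recall:.2f}")
```

Note that with such an imbalance, accuracy would be misleadingly high for a classifier that never predicts the rare class, which is why Precision and Recall are the reported metrics.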




Related research

Possible inaccuracies in the determination of periods from short-term time series, caused by disregarding the real course of light curves and instrumental trends, are documented using the example of a period analysis of a simulated TESS-like light curve with the well-known Lomb-Scargle method.
The VISTA Variables in the Via Lactea (VVV) survey and its extension have been monitoring about 560 square degrees of sky centred on the Galactic bulge and inner disc for nearly a decade. The photometric catalogue contains of order 10$^9$ sources monitored in the K$_s$ band down to 18 mag over hundreds of epochs from 2010-2019. Using these data we develop a decision tree classifier to identify microlensing events. As inputs to the tree, we extract a few physically motivated features as well as simple statistics ensuring a good fit to a microlensing model both on and off the event amplification. This produces a fast and efficient classifier trained on a set of simulated microlensing events and cataclysmic variables, together with flat baseline light curves randomly chosen from the VVV data. The classifier achieves 97 per cent accuracy in identifying simulated microlensing events in a validation set. We run the classifier over the VVV data set and then visually inspect the results, which produces a catalogue of 1,959 microlensing events. For these events, we provide the Einstein radius crossing time via a Bayesian analysis. The spatial dependence of our classifier's recovery efficiency is well characterised, and this allows us to compute spatially resolved completeness maps as a function of Einstein crossing time over the VVV footprint. We compare our approach to previous microlensing searches of the VVV. We highlight the importance of Bayesian fitting to determine the microlensing parameters for sparsely sampled surveys like the VVV.
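The decision-tree approach described above can be sketched in a few lines. The features here are synthetic placeholders (the survey's actual inputs are physically motivated quantities such as fit statistics on and off the event amplification), so this is only an assumed, minimal illustration of training and validating such a classifier.

```python
# Hedged sketch of a decision tree classifier on light-curve features
# (synthetic stand-in features; not the survey's actual feature set).
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Placeholder for features extracted from simulated events vs. flat baselines.
X, y = make_classification(n_samples=2000, n_features=6, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# A shallow tree keeps the classifier fast and easy to interpret.
tree = DecisionTreeClassifier(max_depth=5, random_state=1)
tree.fit(X_tr, y_tr)

# Validate on held-out data, analogous to the quoted validation accuracy.
acc = accuracy_score(y_te, tree.predict(X_te))
print(f"validation accuracy={acc:.2f}")
```

A shallow tree is a natural choice when the classifier must run quickly over of order 10$^9$ sources and its decisions must be easy to inspect.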
James Benford (2010)
How would observers differentiate Beacons from pulsars or other exotic sources, in light of likely Beacon observables? Bandwidth, pulse width and frequency may be distinguishing features. Such transients could be evidence of civilizations slightly higher than ourselves on the Kardashev scale.
What makes a task relatively more or less difficult for a machine compared to a human? Much AI/ML research has focused on expanding the range of tasks that machines can do, with a focus on whether machines can beat humans. Allowing for differences in scale, we can seek interesting (anomalous) pairs of tasks T, T. We define interesting in this way: the "harder to learn" relation is reversed when comparing human intelligence (HI) to AI. While humans seem to be able to understand problems by formulating rules, ML using neural networks does not rely on constructing rules. We discuss a novel approach where the challenge is to perform well under rules that have been created by human beings. We suggest that this provides a rigorous and precise pathway for understanding the difference between the two kinds of learning. Specifically, we suggest a large and extensible class of learning tasks, formulated as learning under rules. With these tasks, both the AI and HI will be studied with rigor and precision. The immediate goal is to find interesting ground-truth rule pairs. In the long term, the goal will be to understand, in a generalizable way, what distinguishes interesting pairs from ordinary pairs, and to define the saliency behind interesting pairs. This may open new ways of thinking about AI, and provide unexpected insights into human learning.
How far can we use multi-wavelength cross-identifications to deconvolve far-infrared images? In this short research note I explore a test case of CLEAN deconvolutions of simulated confused 850 micron SCUBA-2 data, and explore the possible scientific applications of combining these data with ostensibly deeper TolTEC Large Scale Structure (LSS) survey 1.1mm-2mm data. I show that the SCUBA-2 data can be reconstructed to the 1.1mm LMT resolution and achieve an 850 micron deconvolved sensitivity of 0.7 mJy RMS, an improvement of at least ~1.5x over naive point-source-filtered images. The TolTEC/SCUBA-2 combination can constrain cold (<10K) observed-frame colour temperatures, where TolTEC alone cannot.
