
Crossmatching variable objects with the Gaia data

 Added by Lorenzo Rimoldini
 Publication date 2017
Research language: English





Tens of millions of new variable objects are expected to be identified in over a billion time series from the Gaia mission. Crossmatching known variable sources with those from Gaia is crucial to incorporate current knowledge, understand how these objects appear in the Gaia data, train supervised classifiers to recognise known classes, and validate the results of the Variability Processing and Analysis Coordination Unit (CU7) within the Gaia Data Processing and Analysis Consortium (DPAC). The method employed by CU7 to crossmatch variables for the first Gaia data release includes a binary classifier that takes into account positional uncertainties, proper motion, targeted variability signals, and artefacts present in the early calibration of the Gaia data. Crossmatching with a classifier makes it possible to automate the decisions that are typically made during visual inspection. The classifier can be trained with objects characterised by a variety of attributes to ensure similarity in multiple dimensions (astrometry, photometry, time-series features), with no need for a priori transformations to compare different photometric bands, or for predictive models of the motion of objects to compare positions. Other advantages, as well as some disadvantages, of the method are discussed. Implementation steps from the training to the assessment of the crossmatch classifier and the selection of results are described.
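The idea of casting crossmatching as binary classification can be illustrated with a minimal sketch. The feature choices (separation, magnitude difference, amplitude difference), the simulated distributions, and the use of a random forest are illustrative assumptions, not the paper's actual CU7 implementation:

```python
# Hypothetical sketch of crossmatching as binary classification.
# Features and their simulated distributions are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Simulated candidate pairs: (angular separation in arcsec,
# magnitude difference, variability-amplitude difference).
n = 1000
true_pairs = np.column_stack([
    rng.exponential(0.1, n),    # true matches: small separations
    rng.normal(0.0, 0.05, n),   # similar magnitudes
    rng.normal(0.0, 0.02, n),   # similar variability amplitudes
])
false_pairs = np.column_stack([
    rng.uniform(0.0, 2.0, n),   # chance alignments: any separation
    rng.normal(0.0, 1.0, n),
    rng.normal(0.0, 0.5, n),
])
X = np.vstack([true_pairs, false_pairs])
y = np.concatenate([np.ones(n), np.zeros(n)])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Score a new candidate pair: probability that it is a true match.
candidate = np.array([[0.05, 0.01, 0.0]])
p_match = clf.predict_proba(candidate)[0, 1]
```

Because the classifier learns from labelled examples in all feature dimensions at once, no hand-tuned separation cut or band-transformation model is needed, which is the advantage the abstract highlights.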



Related research

In astronomy, we are witnessing an enormous increase in the number of source detections, precision, and diversity of measurements. Additionally, multi-epoch data is becoming the norm, making time-series analyses an important aspect of current astronomy. The Gaia mission is an outstanding example of a multi-epoch survey that provides measurements in a large diversity of domains, with its broad-band photometry; spectrophotometry in blue and red (used to derive astrophysical parameters); spectroscopy (employed to infer radial velocities, v sin(i), and other astrophysical parameters); and its extremely precise astrometry. Most of that information is provided for sources covering the entire sky. Here, we present several properties related to the Gaia time series, such as the time sampling; the different types of measurements; the Gaia G, G_BP, and G_RP-band photometry; and Gaia-inspired studies using the CORrelation-RAdial-VELocities data to assess the potential of the information on the radial velocity, the FWHM, and the contrast of the cross-correlation function. We also present techniques (in use or under development) that optimize the extraction of astrophysical information from the different instruments of Gaia, such as principal component analysis and multi-response regression. A detailed understanding of the behavior of the observed phenomena in the various measurement domains can lead to a richer and more precise characterization of the Gaia data, including the definition of more informative attributes that serve as input to (our) machine-learning algorithms.
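As a toy illustration of the principal-component-analysis technique mentioned above, the sketch below compresses many correlated flux measurements into a few attributes. The simulated data (one latent factor driving 40 bands) is an assumption for demonstration, not Gaia spectrophotometry:

```python
# Illustrative PCA sketch (not the DPAC implementation): compress
# correlated multi-band fluxes into a few informative attributes.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated fluxes in 40 correlated bands for 500 sources:
# a single latent factor plus small per-band noise.
latent = rng.normal(size=(500, 1))
loadings = rng.normal(size=(1, 40))
fluxes = latent @ loadings + 0.05 * rng.normal(size=(500, 40))

pca = PCA(n_components=3)
attrs = pca.fit_transform(fluxes)  # 3 attributes per source

# With one dominant latent factor, the first component
# captures nearly all of the variance.
print(pca.explained_variance_ratio_)
```

The resulting low-dimensional attributes are the kind of compact, informative inputs the abstract envisions feeding to machine-learning algorithms.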
We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks, a probabilistic graphical model that allows us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilises sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model, we use three catalogs with missing data (SAGE, 2MASS, and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing-data catalogs is included, how our method compares to traditional missing-data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent, and by 15% for quasar detection, while keeping the computational cost the same.
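The alternation between estimating missing values and refitting conditional models can be sketched with scikit-learn's iterative imputer. This is a simplified analogue, not the paper's Bayesian-network algorithm, and the two correlated catalogue features below are synthetic:

```python
# Minimal analogue of iterative missing-value estimation before
# classification; NOT the paper's Bayesian-network method.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Two correlated "catalogue" features; the second is often missing.
n = 600
f1 = rng.normal(size=n)
f2 = 0.8 * f1 + 0.2 * rng.normal(size=n)
y = (f1 + f2 > 0).astype(int)       # e.g. a binary variability class

X = np.column_stack([f1, f2])
missing = rng.random(n) < 0.4       # 40% of f2 unobserved
X[missing, 1] = np.nan

# Iteratively model each feature from the others to fill the gaps,
# then classify on the completed feature matrix.
X_filled = IterativeImputer(random_state=0).fit_transform(X)
clf = LogisticRegression().fit(X_filled, y)
acc = clf.score(X_filled, y)
```

The key point mirrored from the abstract is that exploiting dependencies between variables recovers usable information from incomplete catalogues instead of discarding those sources.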
Context. This paper presents an overview of the photometric data that are part of the first Gaia data release. Aims. The principles of the processing and the main characteristics of the Gaia photometric data are presented. Methods. The calibration strategy is outlined briefly and the main properties of the resulting photometry are presented. Results. Relations with other broadband photometric systems are provided. The overall precision for the Gaia photometry is shown to be at the milli-magnitude level and has a clear potential to improve further in future releases.
Gaia DR2 provides a unique all-sky catalogue of 550737 variable stars, of which 151761 are long-period variable (LPV) candidates with G variability amplitudes larger than 0.2 mag (5-95% quantile range). About one-fifth of the LPV candidates are Mira candidates; the majority of the rest are semi-regular variable candidates. For each source, G, BP, and RP photometric time series are published, together with some LPV-specific attributes for the subset of 89617 candidates with periods in G longer than 60 days. We describe this first Gaia catalogue of LPV candidates and present various validation checks. Several samples of LPVs were used to validate the catalogue: a sample of well-studied very bright LPVs with light curves from the AAVSO that are partly contemporaneous with Gaia light curves, a sample of Gaia LPV candidates with good parallaxes, the ASAS-SN catalogue of LPVs, and the OGLE catalogues of LPVs towards the Magellanic Clouds and the Galactic bulge. The analyses of these samples show a good agreement between Gaia DR2 and literature periods. The same is globally true for bolometric corrections of M-type stars. The main contaminant of our DR2 catalogue comes from young stellar objects (YSOs) in the solar vicinity (within ~1 kpc), although their number in the whole catalogue is only at the percent level. A cautionary note is provided about parallax-dependent LPV attributes published in the catalogue. This first Gaia catalogue of LPVs approximately doubles the number of known LPVs with amplitudes larger than 0.2 mag, despite the conservative candidate selection criteria that prioritise low contamination over high completeness, and despite the limited DR2 time coverage compared to the long periods characteristic of LPVs. It also contains a small set of YSO candidates, which offers the serendipitous opportunity to study these objects at an early stage of the Gaia data releases.
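The amplitude criterion stated above (5-95% quantile range of the G magnitudes exceeding 0.2 mag) is straightforward to express in code. The light curves below are synthetic stand-ins, not Gaia data:

```python
# Sketch of the published selection criterion: G variability amplitude
# defined as the 5-95% quantile range, with a 0.2 mag threshold.
import numpy as np

rng = np.random.default_rng(7)

def g_amplitude(mags):
    """5-95% inter-quantile range of the G-band magnitudes."""
    q5, q95 = np.quantile(mags, [0.05, 0.95])
    return q95 - q5

t = np.linspace(0, 300, 80)  # observation times in days (synthetic)
# A large-amplitude LPV-like sinusoid vs. a near-constant star.
lpv = 12.0 + 1.5 * np.sin(2 * np.pi * t / 250) + 0.01 * rng.normal(size=80)
constant = 12.0 + 0.01 * rng.normal(size=80)

candidates = {"lpv": lpv, "constant": constant}
selected = [name for name, m in candidates.items()
            if g_amplitude(m) > 0.2]
```

Using a quantile range rather than the full min-max range makes the amplitude estimate robust to outlier epochs, which matters for sparsely sampled DR2 light curves.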
The second Gaia data release is based on 22 months of mission data with an average of 0.9 billion individual CCD observations per day. A data volume of this size and granularity requires a robust and reliable but still flexible system to achieve the demanding accuracy and precision constraints that Gaia is capable of delivering. The internal Gaia photometric system was initialised using an iterative process that is solely based on Gaia data. A set of calibrations was derived for the entire Gaia DR2 baseline and then used to produce the final mean source photometry. The photometric catalogue contains 2.5 billion sources comprising three different grades depending on the availability of colour information and the procedure used to calibrate them: 1.5 billion gold, 144 million silver, and 0.9 billion bronze. These figures reflect the results of the photometric processing; the content of the data release will be different due to the validation and data quality filters applied during the catalogue preparation. The photometric processing pipeline, PhotPipe, implements all the processing and calibration workflows in terms of Map/Reduce jobs based on the Hadoop platform. This is the first example of a processing system for a large astrophysical survey project to make use of these technologies. The improvements in the generation of the integrated G-band fluxes, in the attitude modelling, in the cross-matching, and in the identification of spurious detections led to a much cleaner input stream for the photometric processing. This, combined with the improvements in the definition of the internal photometric system and calibration flow, produced high-quality photometry. Hadoop proved to be an excellent platform choice for the implementation of PhotPipe in terms of overall performance, scalability, downtime, and manpower required for operations and maintenance.
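The Map/Reduce pattern behind PhotPipe can be sketched in a few lines of plain Python. This is a toy analogue (the real pipeline runs distributed on Hadoop, and the observation values below are invented): map each CCD observation to a (source, flux) pair, then reduce per source to a mean flux.

```python
# Toy Map/Reduce analogue of per-source mean photometry; the actual
# PhotPipe pipeline runs these stages as distributed Hadoop jobs.
from collections import defaultdict

observations = [  # (source_id, calibrated flux) - synthetic values
    (1001, 10.2), (1002, 5.1), (1001, 9.8),
    (1002, 4.9), (1001, 10.0),
]

# Map phase: emit key-value pairs keyed by source identifier.
grouped = defaultdict(list)
for source_id, flux in observations:
    grouped[source_id].append(flux)

# Reduce phase: aggregate each source's epoch fluxes into a mean.
mean_photometry = {sid: sum(fluxes) / len(fluxes)
                   for sid, fluxes in grouped.items()}
```

Because each source's reduction is independent, the workload shards naturally across machines, which is what makes the pattern scale to billions of CCD observations per day.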
