Cross-matching catalogues from radio surveys to catalogues of sources at other wavelengths is extremely difficult, because radio sources are often extended, often consist of several spatially separated components, and often have no component coincident with the optical/infrared host galaxy. Traditionally, the cross-matching is done by eye, but this does not scale to the millions of radio sources expected from the next generation of radio surveys. We present an innovative automated procedure, using Bayesian hypothesis testing, that models trial radio-source morphologies together with putative positions of the host galaxy. This new algorithm differs from an earlier version by allowing more complex radio-source morphologies and by performing a simultaneous fit over a large field. We show that this technique performs well in an unsupervised mode.
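As a rough illustration of the kind of Bayesian hypothesis test involved (not the paper's actual morphology models), the sketch below compares candidate host positions against a "no counterpart" hypothesis assuming isotropic Gaussian positional errors; the function name, the priors and the search-box area are all assumptions made for the example.

```python
import numpy as np

def host_posteriors(radio_pos, radio_sigma, candidates, prior_counterpart=0.5):
    """Toy Bayesian comparison: which candidate host galaxy (if any) is
    associated with the observed radio centroid, assuming isotropic Gaussian
    positional errors. Positions are small-field offsets in arcsec."""
    d2 = np.sum((np.asarray(candidates) - np.asarray(radio_pos)) ** 2, axis=1)
    # Likelihood of the radio centroid under "candidate i is the host".
    like = np.exp(-0.5 * d2 / radio_sigma**2) / (2 * np.pi * radio_sigma**2)
    # "No counterpart": centroid assumed uniform over a 60" x 60" search box.
    like_none = 1.0 / 3600.0
    prior_each = prior_counterpart / len(candidates)
    post = np.append(prior_each * like, (1.0 - prior_counterpart) * like_none)
    return post / post.sum()  # posterior for each candidate, then for "no host"

# Two candidate hosts, 2" and 10" from the radio centroid, sigma = 2"
print(host_posteriors([0.0, 0.0], 2.0, [[2.0, 0.0], [10.0, 0.0]]))
```

The nearer candidate dominates the posterior unless the field is crowded enough that the "no counterpart" prior is raised.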
We lay the foundations of a statistical framework for multi-catalogue cross-correlation and cross-identification based on explicit simplified catalogue models. A proper identification process should rely on both astrometric and photometric data. Under some conditions, the astrometric part and the photometric part can be processed separately and merged a posteriori to provide a single global probability of identification. The present paper addresses almost exclusively the astrometric part and specifies the proper probabilities to be merged with photometric likelihoods. To select matching candidates in n catalogues, we used the Chi (or, equivalently, the Chi-square) test with 2(n-1) degrees of freedom. We thus call this cross-match a chi-match. In order to use Bayes' formula, we considered exhaustive sets of hypotheses based on combinatorial analysis. The volume of the Chi-test domain of acceptance -- a 2(n-1)-dimensional acceptance ellipsoid -- is used to estimate the expected numbers of spurious associations. We derived priors for those numbers using a frequentist approach relying on simple geometrical considerations. Likelihoods are based on standard Rayleigh, Chi and Poisson distributions that we normalized over the Chi-test acceptance domain. We validated our theoretical results by generating and cross-matching synthetic catalogues. The results we obtain do not depend on the order in which the catalogues are cross-correlated. We applied the formalism described in the present paper to build the multi-wavelength catalogues used for the science cases of the ARCHES (Astronomical Resource Cross-matching for High Energy Studies) project. Our cross-matching engine is publicly available through a multi-purpose web interface. In the longer term, we plan to integrate this tool into the CDS XMatch Service.
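For concreteness, here is a minimal sketch of the chi-square statistic with 2(n-1) degrees of freedom used to select candidate n-catalogue associations, under assumed isotropic Gaussian positional errors and a small-field flat-sky approximation; the function name and the acceptance threshold are illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.stats import chi2

def chi_match(positions, sigmas, completeness=0.9973):
    """Chi-square positional test for a candidate n-catalogue association.

    positions : (n, 2) positions from n catalogues [arcsec, small-field offsets]
    sigmas    : (n,) 1-sigma circular positional uncertainties [arcsec]
    Returns (chi-square statistic, accepted at the requested completeness?).
    """
    w = 1.0 / np.asarray(sigmas) ** 2
    xbar = np.average(positions, axis=0, weights=w)   # weighted mean position
    stat = np.sum(w * np.sum((positions - xbar) ** 2, axis=1))
    dof = 2 * (len(positions) - 1)                    # 2(n-1) degrees of freedom
    return stat, stat <= chi2.ppf(completeness, dof)

# Three catalogues possibly detecting the same source
pos = np.array([[0.0, 0.0], [0.3, -0.2], [0.1, 0.4]])
print(chi_match(pos, sigmas=[0.2, 0.3, 0.5]))
```

Under the hypothesis that all n detections come from the same true position, the statistic follows a chi-square distribution with 2(n-1) degrees of freedom, which is what makes the acceptance ellipsoid and the expected number of spurious associations calculable.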
We describe a simple probabilistic method to cross-identify astrophysical sources from different catalogs and provide the probability that a source is associated with a source from another catalog or that it has no counterpart. When the positional uncertainty in one of the catalogs is unknown, this method may be used to derive its typical value and even to study its dependence on the size of objects. It may also be applied when the true centers of a source and of its counterpart at another wavelength do not coincide. We extend this method to the case where there are only one-to-one associations between the catalogs.
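A toy version of such an association probability, assuming a simple two-hypothesis mixture (a true counterpart with Gaussian positional scatter versus an unrelated field object of known surface density), might look as follows; the function name, the prior fraction and the density are illustrative only, not the paper's exact formalism.

```python
import numpy as np

def assoc_probability(sep, sigma, f_counterpart, density):
    """Probability that a candidate at separation `sep` (arcsec) is the true
    counterpart rather than an unrelated field object.

    sigma         : combined positional uncertainty [arcsec]
    f_counterpart : assumed a priori probability of having a counterpart
    density       : surface density of unrelated field objects [arcsec^-2]
    """
    like_true = np.exp(-0.5 * (sep / sigma) ** 2) / (2 * np.pi * sigma**2)
    like_chance = density  # unrelated objects assumed uniformly distributed
    num = f_counterpart * like_true
    return num / (num + (1.0 - f_counterpart) * like_chance)

print(assoc_probability(sep=0.5, sigma=0.3, f_counterpart=0.6, density=1e-3))
```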
We present the results of an approximately 6,100 square degree 104--196 MHz radio sky survey performed with the Murchison Widefield Array during instrument commissioning between 2012 September and 2012 December: the Murchison Widefield Array Commissioning Survey (MWACS). The data were taken as meridian drift scans with two different 32-antenna sub-arrays that were available during the commissioning period. The survey covers approximately 20.5 h < Right Ascension (RA) < 8.5 h, -58 deg < Declination (Dec) < -14 deg over three frequency bands centred on 119, 150 and 180 MHz, with image resolutions of 6--3 arcmin. The catalogue has 3-arcmin angular resolution and a typical noise level of 40 mJy/beam, with reduced sensitivity near the field boundaries and bright sources. We describe the data reduction strategy, based upon mosaicked snapshots, flux density calibration and source-finding method. We present a catalogue of flux density and spectral index measurements for 14,110 sources, extracted from the mosaic, 1,247 of which are sub-components of complexes of sources.
PySE is a Python software package for finding and measuring sources in radio telescope images. The software was designed to detect sources in LOFAR telescope images, but can be used with images from other radio telescopes as well. We introduce the LOFAR Telescope and the context within which PySE was developed, describe the design of PySE, and explain how it is used. Detailed experiments on the validation and testing of PySE are then presented, along with results of performance testing. We discuss some of the current issues with the algorithms implemented in PySE and their interaction with LOFAR images, concluding with the current status of PySE and its future development.
Aspects ([aspɛ], ASsociation PositionnellE/ProbabilistE de CaTalogues de Sources in French) is a Fortran 95 code for the cross-identification of astrophysical sources. Its source files are freely available. Given the coordinates and positional uncertainties of all the sources in two catalogs K and K′, Aspects computes the probability that an object in K and one in K′ are the same or that they have no counterpart. Three exclusive assumptions are considered: (1) Several-to-one associations: a K-source has at most one counterpart in K′, but a K′-source may have several counterparts in K; (2) One-to-several associations: the same with K and K′ swapped; (3) One-to-one associations: a K-source has at most one counterpart in K′ and vice versa. To compute the probabilities of association, Aspects needs the a priori (i.e. ignoring positions) probability that an object has a counterpart. The code obtains estimates of this quantity by maximizing the likelihood of observing all the sources at their effective positions under each assumption. The likelihood may also be used to determine the most appropriate model, given the data, or to estimate the typical positional uncertainty if it is unknown.
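The likelihood-maximization step can be illustrated with a much simpler two-component mixture than Aspects' actual several-to-one/one-to-one models: in the sketch below, the a priori counterpart fraction f is estimated from nearest-neighbour separations under assumed Gaussian positional errors and a uniform density of unrelated field objects (all names and numbers are illustrative).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def estimate_prior_fraction(seps, sigma, density):
    """Maximum-likelihood estimate of the a priori probability f that a source
    has a counterpart, from nearest-neighbour separations `seps` (arcsec),
    a known combined positional uncertainty `sigma` (arcsec) and a surface
    density `density` of unrelated objects (arcsec^-2)."""
    seps = np.asarray(seps)
    like_true = np.exp(-0.5 * (seps / sigma) ** 2) / (2 * np.pi * sigma**2)
    like_chance = density  # unrelated objects assumed uniformly distributed

    def neg_log_like(f):
        return -np.sum(np.log(f * like_true + (1.0 - f) * like_chance))

    res = minimize_scalar(neg_log_like, bounds=(1e-6, 1 - 1e-6), method="bounded")
    return res.x

# 80 genuine matches (0.3" scatter) mixed with 20 chance alignments
rng = np.random.default_rng(1)
true_seps = np.hypot(rng.normal(0, 0.3, 80), rng.normal(0, 0.3, 80))
chance_seps = rng.uniform(0.0, 5.0, 20)
print(estimate_prior_fraction(np.concatenate([true_seps, chance_seps]), 0.3, 0.01))
```

Maximizing the same likelihood over different association models (several-to-one, one-to-several, one-to-one) is what lets the data choose between them, as described in the abstract.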