
PUMA: The Positional Update and Matching Algorithm

Published by Jack Line
Publication date: 2016
Research field: Physics
Paper language: English





We present new software to cross-match low-frequency radio catalogues: the Positional Update and Matching Algorithm (PUMA). PUMA combines a positional Bayesian probabilistic approach with spectral matching criteria, allowing for confusing sources in the matching process. We go on to create a radio sky model using PUMA based on the Murchison Widefield Array Commissioning Survey, and are able to automatically cross-match 98.5% of sources. Using the characteristics of this sky model, we create simple simulated mock catalogues on which to test PUMA, and find that PUMA can reliably find the correct spectral indices of sources, along with being able to recover ionospheric offsets. Finally, we use this sky model to calibrate and remove foreground sources from simulated interferometric data, generated using OSKAR (the Oxford University visibility generator). We demonstrate that there is a substantial improvement in foreground source removal when using higher frequency and higher resolution source positions, even when correcting positions by an average of 0.3 arcmin given a synthesized beam-width of 2.3 arcmin.
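The abstract describes combining a Bayesian positional probability with spectral criteria. The paper's exact likelihood is not reproduced here; the following is a minimal sketch of the generic Bayesian positional-matching idea, assuming circular Gaussian positional errors and a uniform background density of unrelated sources (the function name and parameters are illustrative, not PUMA's API):

```python
import numpy as np

def positional_posterior(sep_arcsec, sigma1, sigma2, background_density):
    """Likelihood-ratio style probability that two catalogued sources are
    the same object, given their angular separation and per-catalogue
    positional uncertainties (a generic sketch, not the exact PUMA formula)."""
    var_tot = sigma1**2 + sigma2**2  # combined positional variance (arcsec^2)
    # Likelihood of the observed separation under the "same source" hypothesis
    match_like = np.exp(-sep_arcsec**2 / (2.0 * var_tot)) / (2.0 * np.pi * var_tot)
    # Chance-alignment likelihood: an unrelated source from a uniform background
    chance_like = background_density
    return match_like / (match_like + chance_like)
```

A close pair with small positional errors scores near 1, while a separation of many combined sigmas falls toward the chance-alignment floor; spectral criteria would then arbitrate the ambiguous middle ground.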




Read also

The Packed Ultra-wideband Mapping Array (PUMA) is a proposed low-resolution transit interferometric radio telescope operating over the frequency range 200-1100 MHz. Its rich science portfolio includes measuring structure in the universe from redshift z = 0.3 to 6 using 21cm intensity mapping, detecting one million fast radio bursts, and monitoring thousands of pulsars. This will allow PUMA to advance science in three different areas of physics: the physics of dark energy, the physics of cosmic inflation, and time-domain astrophysics. This document is a response to a request for information (RFI) by the Panel on Radio, Millimeter, and Submillimeter Observations from the Ground (RMS) of the Decadal Survey on Astronomy and Astrophysics 2020. We present the science case of PUMA, the development path, and major risks to the project.
In this paper we propose a new pattern-matching algorithm, based on a statistical method, for matching CCD images to a stellar catalogue. The method of constructing star pairs greatly reduces the computational complexity compared with the triangle method. We use a subsample of the brightest objects from the image and reference catalogue, and then find a coordinate transformation between the image and reference catalogue based on the statistical information of star pairs. All the objects are then matched based on the initial plate solution. The matching process can be accomplished in several milliseconds for images taken by the Yunnan Observatory 1-m telescope.
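The star-pair idea can be illustrated with a deliberately simplified, translation-only voting scheme: form the offset between every image/reference pair of bright stars, and take the modal offset as the plate solution (the paper's statistics handle a fuller transformation; this sketch and its function name are hypothetical):

```python
import numpy as np

def estimate_translation(image_xy, ref_xy, tol=1.0):
    """Estimate a pure-translation plate offset by voting over all
    image-reference star pairs. Correctly paired stars all produce the
    same offset; random pairs scatter, so the modal offset wins.
    (Illustrative stand-in for the paper's star-pair statistics.)"""
    # Offsets of every reference star relative to every image star: (N, M, 2)
    offsets = ref_xy[None, :, :] - image_xy[:, None, :]
    flat = offsets.reshape(-1, 2)
    best, best_votes = None, 0
    for cand in flat:
        # Count offsets that agree with this candidate within the tolerance box
        votes = int(np.sum(np.all(np.abs(flat - cand) < tol, axis=1)))
        if votes > best_votes:
            best, best_votes = cand, votes
    return best
```

With the shift in hand, all remaining objects can be matched by a simple nearest-neighbour pass, which is why the full process can run in milliseconds.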
PUMA is a proposal for an ultra-wideband, low-resolution and transit interferometric radio telescope operating at $200\text{--}1100\,\mathrm{MHz}$. Its design is driven by six science goals which span three science themes: the physics of dark energy (measuring the expansion history and growth of the universe up to $z=6$), the physics of inflation (constraining primordial non-Gaussianity and primordial features) and the transient radio sky (detecting one million fast radio bursts and following up SKA-discovered pulsars). We propose two array configurations composed of hexagonally close-packed 6m dish arrangements with 50% fill factor. The initial 5,000 element petite array is scientifically compelling, and can act as a demonstrator and a stepping stone to the full 32,000 element full array. Viewed as a 21cm intensity mapping telescope, the program has the noise equivalent of a traditional spectroscopic galaxy survey comprised of 0.6 and 2.5 billion galaxies at a comoving wavenumber of $k=0.5\,h\,\mathrm{Mpc}^{-1}$ spanning the redshift range $z = 0.3 - 6$ for the petite and full configurations, respectively. At redshifts beyond $z=2$, the 21cm technique is a uniquely powerful way of mapping the universe, while the low-redshift range will allow for numerous cross-correlations with existing and upcoming surveys. This program is enabled by the development of ultra-wideband radio feeds, cost-effective dish construction methods, commodity radio-frequency electronics driven by the telecommunication industry and the emergence of sufficient computing power to facilitate real-time signal processing that exploits the full potential of massive radio arrays. The project has an estimated construction cost of 55 and 330 million FY19 USD for the petite and full array configurations. Including R&D, design, operations and science analysis, the cost rises to 125 and 600 million FY19 USD, respectively.
This paper presents new and efficient algorithms for matching stellar catalogues where the transformation between the coordinate systems of the two catalogues is unknown and may include shearing. Finding a given object, whether a star or an asterism, from the first catalogue in the second is logarithmic in time rather than polynomial, yielding a dramatic speed-up relative to a naive implementation. Both the acceleration of the matching algorithm and the ability to solve for arbitrary affine transformations will not only allow the registration of stellar catalogues and images that are currently impossible to use, but will also find applications in machine vision and other imaging applications.
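The logarithmic lookup typically comes from precomputing transformation-invariant quantities for reference asterisms and searching them with a sorted index. A minimal sketch of that pattern, assuming a single scalar invariant per asterism (a hypothetical stand-in for the paper's actual affine invariants):

```python
import bisect

def build_index(ref_invariants):
    """Sort reference invariants once (O(n log n)) so that every later
    lookup is a binary search, i.e. O(log n) per query rather than a
    polynomial scan over candidate pairings."""
    return sorted(ref_invariants)

def has_match(index, value, tol=1e-3):
    """Return True if some reference invariant lies within tol of value."""
    i = bisect.bisect_left(index, value - tol)
    return i < len(index) and index[i] <= value + tol
```

In practice the invariant is chosen to be unchanged under the allowed transformation class (here, affine maps), so a hit in the index immediately proposes a candidate registration to verify.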
We lay the foundations of a statistical framework for multi-catalogue cross-correlation and cross-identification based on explicit simplified catalogue models. A proper identification process should rely on both astrometric and photometric data. Under some conditions, the astrometric part and the photometric part can be processed separately and merged a posteriori to provide a single global probability of identification. The present paper addresses almost exclusively the astrometric part and specifies the proper probabilities to be merged with photometric likelihoods. To select matching candidates in n catalogues, we used the Chi (or, indifferently, the Chi-square) test with 2(n-1) degrees of freedom. We thus call this cross-match a chi-match. In order to use Bayes' formula, we considered exhaustive sets of hypotheses based on combinatorial analysis. The volume of the Chi-test domain of acceptance -- a 2(n-1)-dimensional acceptance ellipsoid -- is used to estimate the expected numbers of spurious associations. We derived priors for those numbers using a frequentist approach relying on simple geometrical considerations. Likelihoods are based on standard Rayleigh, Chi and Poisson distributions that we normalized over the Chi-test acceptance domain. We validated our theoretical results by generating and cross-matching synthetic catalogues. The results we obtain do not depend on the order used to cross-correlate the catalogues. We applied the formalism described in the present paper to build the multi-wavelength catalogues used for the science cases of the ARCHES (Astronomical Resource Cross-matching for High Energy Studies) project. Our cross-matching engine is publicly available through a multi-purpose web interface. In the longer term, we plan to integrate this tool into the CDS XMatch Service.
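The 2(n-1) degrees of freedom arise because n two-dimensional positions contribute 2n coordinates, of which 2 are absorbed by estimating the common source position. A sketch of the corresponding statistic, assuming circular Gaussian errors and the standard inverse-variance weighted mean (which may differ in detail from the paper's exact formulation):

```python
import numpy as np

def chi2_match_stat(positions, sigmas):
    """Chi-square statistic for a candidate association of n detections,
    one per catalogue, each with circular Gaussian positional error sigma.
    Under the "same source" hypothesis the statistic follows a chi-square
    distribution with 2(n-1) degrees of freedom."""
    positions = np.asarray(positions, dtype=float)   # shape (n, 2)
    w = 1.0 / np.asarray(sigmas, dtype=float)**2     # inverse-variance weights
    mean = (w[:, None] * positions).sum(axis=0) / w.sum()  # best common position
    resid2 = ((positions - mean)**2).sum(axis=1)     # squared residuals per catalogue
    return float((w * resid2).sum()), 2 * (len(positions) - 1)
```

Comparing the statistic against a chi-square quantile at the chosen degrees of freedom gives the acceptance ellipsoid described in the abstract; its volume in turn sets the expected number of spurious associations.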