We introduce a new method to estimate the probability that an extragalactic transient source is associated with a candidate host galaxy. This approach relies solely on simple observables: sky coordinates and their uncertainties, and galaxy fluxes and angular sizes. The formalism invokes Bayes' rule to calculate the posterior probability P(O_i|x) from the galaxy prior P(O), observables x, and an assumed model for the true distribution of transients in and around their host galaxies. Using simulated transients placed in the well-studied COSMOS field, we consider several agnostic and physically motivated priors and offset distributions to explore the method's sensitivity. We then apply the methodology to the set of 13 fast radio bursts (FRBs) localized with an uncertainty of several arcseconds. Our methodology finds that nine of these are securely associated with a single host galaxy, P(O_i|x)>0.95. We examine the observed and intrinsic properties of these secure FRB hosts, recovering distributions similar to those of previous works. Furthermore, we find a strong correlation between the apparent magnitude of the securely identified host galaxies and the estimated cosmic dispersion measures of the corresponding FRBs, which results from the Macquart relation. Future work with FRBs will leverage this relation and other measures from the secure hosts as priors for future associations. The methodology is generic to transient type, localization error, and image quality. We encourage its application to other transients where host galaxy associations are critical to the science, e.g. gravitational wave events, gamma-ray bursts, and supernovae. We have encoded the technique in Python on GitHub: https://github.com/FRBs/astropath.
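The Bayes'-rule step above can be sketched in a few lines. This is a minimal illustration, not the astropath implementation: it assumes a single-Gaussian offset model whose width combines the localization error with the galaxy half-light radius, and a flux-weighted prior P(O_i) ∝ 10^(−0.4 m), one of several prior choices the paper explores; the function name is illustrative.

```python
import numpy as np

def host_posteriors(theta, sigma_loc, r_half, mag):
    """Posterior P(O_i|x) over candidate hosts via Bayes' rule (sketch).

    theta     -- angular offsets transient-to-candidate (arcsec)
    sigma_loc -- transient localization uncertainty (arcsec)
    r_half    -- candidate half-light radii (arcsec)
    mag       -- candidate apparent magnitudes
    """
    theta, r_half, mag = map(np.asarray, (theta, r_half, mag))
    # Offset likelihood: Gaussian broadened by galaxy size (assumption)
    sigma2 = sigma_loc**2 + r_half**2
    likelihood = np.exp(-0.5 * theta**2 / sigma2) / sigma2
    # Flux-weighted prior (one agnostic choice among several)
    prior = 10.0**(-0.4 * mag)
    post = prior * likelihood
    return post / post.sum()
```

A nearby bright galaxy then dominates the normalized posterior over a distant faint one, as expected.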
Cross-matching catalogues from radio surveys to catalogues of sources at other wavelengths is extremely hard, because radio sources are often extended, often consist of several spatially separated components, and often no radio component is coincident with the optical/infrared host galaxy. Traditionally, the cross-matching is done by eye, but this does not scale to the millions of radio sources expected from the next generation of radio surveys. We present an innovative automated procedure, using Bayesian hypothesis testing, that models trial radio-source morphologies with putative positions of the host galaxy. This new algorithm differs from an earlier version by allowing more complex radio-source morphologies and performing a simultaneous fit over a large field. We show that this technique performs well in an unsupervised mode.
Probabilistic cross-identification has been successfully applied to a number of problems in astronomy, from matching simple point sources to associating stars with unknown proper motions and even radio observations with realistic morphology. Here we study the Bayes factor for clustered objects and focus in particular on galaxies to assess the effect of typical angular correlations. Numerical calculations provide the modified relationship, which (as expected) suppresses the evidence for the associations at the shortest separations, where the 2-point auto-correlation function is large. Ultimately this means that the matching probability drops at somewhat shorter scales than in previous models.
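The underlying point-source Bayes factor, with an optional clustering suppression, can be written down directly. The 1/(1+w) correction below is an illustrative stand-in for the paper's numerically derived relationship, but it reproduces the qualitative behavior: evidence is suppressed where the 2-point correlation function is large.

```python
import numpy as np

def bayes_factor(psi, sigma1, sigma2, w=None):
    """Two-catalog point-source Bayes factor (all angles in radians):

        B = (2/s) exp(-psi^2 / (2 s)),   s = sigma1^2 + sigma2^2,

    optionally divided by 1 + w(psi) to account for angular clustering
    (an illustrative form, not the paper's exact numerical result).
    """
    s = sigma1**2 + sigma2**2
    b = (2.0 / s) * np.exp(-0.5 * psi**2 / s)
    if w is not None:
        b = b / (1.0 + w(psi))
    return b
```

For separations of a few sigma the evidence falls off rapidly, and any positive w(psi) lowers it further at small scales.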
Observational astronomy in the time-domain era faces several new challenges. One of them is the efficient use of observations obtained at multiple epochs. The work presented here addresses faint object detection with multi-epoch data, and describes an incremental strategy for separating real objects from artifacts in ongoing surveys, in situations where the single-epoch data are summaries of the full image data, such as single-epoch catalogs of flux and direction estimates for candidate sources. The basic idea is to produce low-threshold single-epoch catalogs, and use a probabilistic approach to accumulate catalog information across epochs; this is in contrast to more conventional strategies based on co-added or stacked image data across all epochs. We adopt a Bayesian approach, addressing object detection by calculating the marginal likelihoods for hypotheses asserting there is no object, or one object, in a small image patch containing at most one cataloged source at each epoch. The object-present hypothesis interprets the sources in a patch at different epochs as arising from a genuine object; the no-object (noise) hypothesis interprets candidate sources as spurious, arising from noise peaks. We study the detection probability for constant-flux objects in a simplified Gaussian noise setting, comparing results based on single exposures and stacked exposures to results based on a series of single-epoch catalog summaries. Computing the detection probability based on catalog data amounts to generalized cross-matching: it is the product of a factor accounting for matching of the estimated fluxes of candidate sources, and a factor accounting for matching of their estimated directions. We find that probabilistic fusion of multi-epoch catalog information can detect sources with only modest sacrifice in sensitivity and selectivity compared to stacking.
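The flux-matching factor of this generalized cross-matching can be sketched in the simplified Gaussian setting the abstract describes. The zero-centered Gaussian prior on the true flux under the object-present hypothesis is an assumption made for illustration, not the paper's exact prior; the per-epoch Gaussian normalizations cancel between the two hypotheses, leaving a closed form.

```python
import numpy as np

def log_detection_odds(flux, flux_err, prior_sigma=10.0):
    """Log Bayes factor for 'one constant-flux object' vs 'noise only',
    fusing single-epoch catalog flux estimates (Gaussian sketch).

    H1: all epochs share a true flux F ~ N(0, prior_sigma^2);
    H0: each epoch is an independent zero-mean noise peak.
    """
    flux = np.asarray(flux, dtype=float)
    w = 1.0 / np.asarray(flux_err, dtype=float)**2
    W = w.sum() + 1.0 / prior_sigma**2   # posterior precision of F
    mu = (w * flux).sum() / W            # posterior mean of F
    # Marginalizing F analytically, the -0.5*sum(w*f^2) terms cancel:
    return 0.5 * W * mu**2 - 0.5 * np.log(W * prior_sigma**2)
```

Consistently high fluxes across epochs accumulate positive log-odds, while scatter around zero yields negative log-odds, mirroring the incremental accumulation strategy.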
Modern astronomy increasingly relies upon systematic surveys, whose dedicated telescopes continuously observe the sky across varied wavelength ranges of the electromagnetic spectrum; some surveys also observe non-electromagnetic messengers, such as high-energy particles or gravitational waves. Stars and galaxies look different through the eyes of different instruments, and their independent measurements have to be carefully combined to provide a complete, sound picture of the multicolor and eventful universe. The association of an object's independent detections is, however, a difficult problem scientifically, computationally, and statistically, raising varied challenges across diverse astronomical applications. The fundamental problem is finding records in survey databases with directions that match to within the direction uncertainties. Such astronomic
One of the outstanding challenges of cross-identification is multiplicity: detections in crowded regions of the sky are often linked to more than one candidate association of similar likelihood. We map the resulting maximum-likelihood partitioning to the fundamental assignment problem of discrete mathematics and efficiently solve the two-way catalog-level matching in the realm of combinatorial optimization using the so-called Hungarian algorithm. We introduce the method, demonstrate its performance in a mock universe where the true associations are known, and discuss the applicability of the new procedure to large surveys.
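The assignment step maps directly onto standard tooling: SciPy's linear_sum_assignment implements the Hungarian (Kuhn-Munkres) algorithm for exactly this one-to-one, total-cost-minimizing matching. A toy sketch, with an illustrative cost matrix:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Cost of matching catalog-A source i to catalog-B source j, e.g. the
# negative log-likelihood of each candidate association (toy values here;
# in practice these would come from positional Bayes factors).
cost = np.array([[0.1, 5.0, 9.0],
                 [4.0, 0.2, 6.0],
                 [8.0, 7.0, 0.3]])

# The Hungarian algorithm returns the one-to-one assignment that
# minimizes the total cost over the whole catalog simultaneously.
rows, cols = linear_sum_assignment(cost)
matches = list(zip(rows.tolist(), cols.tolist()))
```

This resolves multiplicity globally rather than greedily: a source is not simply given its nearest counterpart if that counterpart is better explained by another source.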
Source catalogs are not the only products extracted from astronomy observations. Their sky coverage is always carefully recorded and used in statistical analyses, such as correlation and luminosity function studies. Here we present a novel method for catalog matching, which inherently builds on the coverage information for better performance and completeness. A modified version of the Zones Algorithm is introduced for matching partially overlapping observations, where irrelevant parts of the data are excluded up front for efficiency. Our design enables searches to focus on specific areas on the sky to further speed up the process. Another important advantage of the new method over traditional techniques is its ability to quickly detect dropouts, i.e., the missing components that are in the observed regions of the celestial sphere but did not reach the detection limit in some observations. These often provide invaluable insight into the spectral energy distribution of the matched sources but are rarely available in traditional associations.
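The declination-zone idea can be sketched minimally: bucket sources into declination stripes so each source is compared only against neighbors in its own and adjacent zones. The flat-sky small-angle metric and the choice of zone height equal to the search radius are illustrative simplifications, not the paper's exact design (which also folds in coverage masks).

```python
import math
from collections import defaultdict

def build_zones(catalog, zone_height_deg):
    """Bucket (ra, dec) pairs by declination stripe index."""
    zones = defaultdict(list)
    for idx, (ra, dec) in enumerate(catalog):
        zones[int((dec + 90.0) // zone_height_deg)].append((idx, ra, dec))
    return zones

def zone_match(cat_a, cat_b, radius_deg, zone_height_deg=None):
    """Zones Algorithm sketch: candidate pairs only within +/-1 zone."""
    zone_height_deg = zone_height_deg or radius_deg
    zones_b = build_zones(cat_b, zone_height_deg)
    pairs = []
    for i, (ra, dec) in enumerate(cat_a):
        z = int((dec + 90.0) // zone_height_deg)
        for zz in (z - 1, z, z + 1):       # adjacent stripes cover the radius
            for j, ra2, dec2 in zones_b.get(zz, ()):
                # Small-angle metric with the RA axis compressed by cos(dec)
                dra = (ra - ra2) * math.cos(math.radians(0.5 * (dec + dec2)))
                if dra**2 + (dec - dec2)**2 <= radius_deg**2:
                    pairs.append((i, j))
    return pairs
```

Dropout detection then falls out naturally: a source whose zone neighborhood in another catalog is empty, while lying inside that catalog's coverage, is a candidate dropout.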
Tamas Budavari (2012)
Object cross-identification in multiple observations is often complicated by the uncertainties in their astrometric calibration. Due to the lack of standard reference objects, an image with a small field of view can have significantly larger errors in its absolute positioning than the relative precision of the detected sources within. We present a new general solution for the relative astrometry that quickly refines the World Coordinate System of overlapping fields. The efficiency is obtained through the use of infinitesimal 3-D rotations on the celestial sphere, which do not involve trigonometric functions. They also enable an analytic solution to an important step in making the astrometric corrections. In cases with many overlapping images, correctly identifying the detections that match across different images is difficult. We describe a new greedy Bayesian approach for selecting the best object matches across a large number of overlapping images. The methods are developed and demonstrated on the Hubble Legacy Archive, one of the most challenging data sets today. We describe a novel catalog compiled from many Hubble Space Telescope observations, where the detections are combined into a searchable collection of matches that link the individual detections. The matches provide descriptions of astronomical objects involving multiple wavelengths and epochs. High relative positional accuracy of objects is achieved across the Hubble images, often with sub-pixel precision on the order of just a few milli-arcseconds. The result is a reliable set of high-quality associations that are publicly available online.
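The trig-free infinitesimal-rotation trick, and the analytic least-squares step it enables, can be sketched as follows. To first order a small rotation by vector omega maps a unit vector v to v + omega × v, so fitting omega from matched pairs is a linear problem; function names here are illustrative.

```python
import numpy as np

def skew(v):
    """Cross-product matrix: skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def apply_small_rotation(xyz, omega):
    """First-order rotation on the unit sphere, v' ~ v + omega x v,
    renormalized; no trigonometric functions are evaluated."""
    v = np.asarray(xyz, dtype=float) + np.cross(omega, xyz)
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def fit_small_rotation(v, u):
    """Analytic least-squares estimate of the infinitesimal rotation
    taking unit vectors v onto matched unit vectors u, from the linear
    relation u - v ~ omega x v = -skew(v) @ omega."""
    A = np.vstack([-skew(vi) for vi in v])
    b = np.concatenate([ui - vi for vi, ui in zip(v, u)])
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega
```

For WCS offsets of order arcseconds, the neglected second-order terms are far below the milli-arcsecond precision quoted above, which is why the linear solve suffices.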
We present a data model describing the structure of spectrophotometric datasets with spectral and temporal coordinates and associated metadata. This data model may be used to represent spectra, time series data, segments of SEDs (spectral energy distributions), and other spectral or temporal associations.
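A hypothetical sketch of the idea in Python dataclasses: one container covers spectra, time series, and SED segments, distinguished by which coordinate varies. The field names are illustrative, not the model's actual schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class SpectroPoint:
    """One sample with spectral and/or temporal coordinates."""
    spectral: Optional[float]  # e.g. wavelength; None for a pure time series
    time: Optional[float]      # e.g. MJD; None for a pure spectrum
    flux: float
    flux_err: float

@dataclass
class SpectroDataset:
    """Uniform container for spectra, time series, or SED segments,
    with associated metadata (illustrative sketch of the data model)."""
    points: List[SpectroPoint]
    meta: Dict[str, str] = field(default_factory=dict)
```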
Tamas Budavari (2011)
We discuss a novel approach to identifying cosmic events in separate and independent observations. Our focus is on true events, such as supernova explosions, that happen only once and whose measurements are therefore not repeatable. Their classification and analysis have to make the best use of all the available data. Bayesian hypothesis testing is used to associate streams of events in space and time. Probabilities are assigned to the matches by studying their rates of occurrence. A case study of Type Ia supernovae illustrates how to use lightcurves in the cross-identification process. Constraints from realistic lightcurves happen to be well-approximated by Gaussians in time, which makes the matching process very efficient. Model-dependent associations are computationally more demanding but can further boost our confidence.
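A hedged sketch of the space-time association evidence: the spatial term is the standard astrometric Bayes factor, and a Gaussian-in-time term stands in for the lightcurve-derived constraint weighed against the background occurrence rate. The width tau and the rate are illustrative parameters, not the paper's calibrated values.

```python
import math

def event_bayes_factor(psi, sigma1, sigma2, dt, tau, rate):
    """Space-time association evidence for one-off events (sketch).

    psi            -- angular separation (radians)
    sigma1, sigma2 -- positional uncertainties of the two detections (radians)
    dt             -- time offset between the detections
    tau            -- Gaussian width of the lightcurve constraint (assumption)
    rate           -- background event rate per unit time (assumption)
    """
    s = sigma1**2 + sigma2**2
    b_space = (2.0 / s) * math.exp(-0.5 * psi**2 / s)
    # Gaussian temporal constraint vs. a uniform background of 'rate' events
    b_time = (math.exp(-0.5 * (dt / tau)**2)
              / (math.sqrt(2.0 * math.pi) * tau * rate))
    return b_space * b_time
```

Because the temporal factor is Gaussian, it composes with the spatial factor by simple multiplication, which is what makes the stream matching efficient.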