We discuss a novel approach to identifying cosmic events in separate and independent observations. We focus on true transient events, such as supernova explosions, which happen only once and whose measurements are therefore not repeatable; their classification and analysis must make the best use of all available data. Bayesian hypothesis testing is used to associate streams of events in space and time, and probabilities are assigned to the matches by studying their rates of occurrence. A case study of Type Ia supernovae illustrates how to use lightcurves in the cross-identification process. Constraints from realistic lightcurves turn out to be well approximated by Gaussians in time, which makes the matching process very efficient. Model-dependent associations are computationally more demanding but can further boost our confidence.
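The Gaussian time constraints mentioned above lend themselves to a simple closed form. As a minimal illustrative sketch (not the paper's own code), the function below evaluates a Bayes factor for two detections being the same transient, assuming each detection constrains the event time as a Gaussian and assuming a flat prior over an observing window of length `T`; all names and the prior choice are our assumptions.

```python
import math

def time_bayes_factor(t1, s1, t2, s2, T):
    """Bayes factor that two detections at times t1, t2 (with Gaussian
    uncertainties s1, s2) come from the same one-off event, against the
    hypothesis of unrelated events drawn uniformly from a window of
    length T. All quantities in the same time units (illustrative)."""
    var = s1 ** 2 + s2 ** 2            # combined Gaussian variance
    dt = t1 - t2                       # time offset between detections
    return T / math.sqrt(2 * math.pi * var) * math.exp(-dt ** 2 / (2 * var))
```

Because the constraint is Gaussian, the factor depends only on the offset and the combined variance, which is what makes large-scale matching of event streams efficient.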
We investigate the quality of associations of astronomical sources from multi-wavelength observations using simulated detections that are realistic in terms of their astrometric accuracy, small-scale clustering properties, and selection functions. We present a general method to build such mock catalogs for studying associations, and compare the statistics of cross-identifications based on angular separation and on Bayesian probability criteria. In particular, we focus on the highly relevant problem of cross-correlating the ultraviolet Galaxy Evolution Explorer (GALEX) and optical Sloan Digital Sky Survey (SDSS) surveys. Using refined simulations of the relevant catalogs, we find that probability thresholds yield lower contamination by false associations and are more efficient than angular-separation cuts. Our study presents a set of recommended criteria to construct reliable cross-match catalogs between SDSS and GALEX with minimal artifacts.
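The angular-separation criterion compared above is the simpler of the two: every candidate pair is accepted or rejected by the same distance cut, regardless of the positional accuracy of the individual detections. A minimal sketch of that baseline, with a cut value chosen purely for illustration, is:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Great-circle separation in arcseconds between two sky positions
    given in degrees, computed with the haversine formula (numerically
    stable at small separations)."""
    r1, d1, r2, d2 = (math.radians(v) for v in (ra1, dec1, ra2, dec2))
    a = (math.sin((d2 - d1) / 2) ** 2
         + math.cos(d1) * math.cos(d2) * math.sin((r2 - r1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(a))) * 3600.0

def sep_match(sep_arcsec, cut_arcsec=5.0):
    """Fixed separation cut: accepts every pair closer than the cut,
    independent of the astrometric errors of the two detections."""
    return sep_arcsec < cut_arcsec
```

The probability criterion, by contrast, weights each pair by its astrometric uncertainties, which is why it can admit accurate close pairs while rejecting sloppy ones at the same separation.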
We present a general probabilistic formalism for cross-identifying astronomical point sources in multiple observations. Our Bayesian approach, symmetric in all observations, is the foundation of a unified framework for object matching, where not only spatial information but also physical properties, such as colors, redshift, and luminosity, can be considered in a natural way. We provide a practical recipe to implement an efficient recursive algorithm to evaluate the Bayes factor over a set of catalogs with known circular errors in positions. This new methodology is crucial for studies leveraging the synergy of today's multi-wavelength observations and for entering the time-domain science of the upcoming survey telescopes.
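To make the formalism concrete, the sketch below evaluates one common form of such an N-way positional Bayes factor in the small-angle (flat-sky) limit, with circular Gaussian errors and precision weights $w_i = 1/\sigma_i^2$. This is our own simplified reconstruction under those stated assumptions, not the paper's recursive implementation, and it uses a direct pairwise sum rather than the recursive evaluation described in the abstract.

```python
import math

def bayes_factor_flat(positions, sigmas):
    """N-catalog positional Bayes factor in the flat-sky limit.

    positions: list of (x, y) tangent-plane offsets in radians
    sigmas:    circular positional uncertainties in radians
    Uses precision weights w_i = 1/sigma_i^2; larger values favor
    the hypothesis that all detections are the same source.
    """
    w = [1.0 / s ** 2 for s in sigmas]
    W = sum(w)
    n = len(w)
    # weighted sum of squared pairwise separations
    q = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            q += w[i] * w[j] * (dx * dx + dy * dy)
    return 2 ** (n - 1) * (math.prod(w) / W) * math.exp(-q / (2 * W))
```

The factor is symmetric in all catalogs, as the abstract requires: permuting the detections leaves both the weight product and the pairwise sum unchanged.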
Probabilistic cross-identification has been successfully applied to a number of problems in astronomy, from matching simple point sources to associating stars with unknown proper motions, and even radio observations with realistic morphology. Here we study the Bayes factor for clustered objects, focusing in particular on galaxies to assess the effect of typical angular correlations. Numerical calculations provide the modified relationship, which (as expected) suppresses the evidence for associations at the shortest separations, where the 2-point auto-correlation function is large. Ultimately this means that the matching probability drops at somewhat shorter scales than in previous models.
Cosmic voids and their corresponding redshift-aggregated projections of mass densities, known as troughs, play an important role in our attempt to model the large-scale structure of the Universe. Understanding these structures enables tests comparing the standard model with alternative cosmologies, constrains the dark energy equation of state, and provides evidence to differentiate among gravitational theories. In this paper, we extend the subspace-constrained mean shift algorithm, a recently introduced method to estimate density ridges, and apply it to 2D weak-lensing mass density maps from the Dark Energy Survey Y1 data release to identify curvilinear filamentary structures. We compare the obtained ridges with previous approaches to extracting trough structure in the same data, and apply curvelets as an alternative wavelet-based method to constrain densities. We then invoke the Wasserstein distance between noisy and noiseless simulations to validate the denoising capabilities of our method. Our results demonstrate the viability of ridge estimation as a precursor for denoising weak-lensing quantities to recover the large-scale structure, paving the way for a more versatile and effective search for troughs.
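The Wasserstein-distance validation step can be illustrated in its simplest one-dimensional form. As a heavily simplified sketch (our assumption, not the paper's pipeline), one can compare the pixel-value distributions of a noisy and a noiseless map by quantile matching, which for two equal-length samples reduces to averaging the gaps between sorted values:

```python
def wasserstein_1d(a, b):
    """First-order Wasserstein distance between two equal-length samples,
    computed by matching sorted order statistics (empirical quantiles).
    A zero distance means the two empirical distributions coincide."""
    if len(a) != len(b):
        raise ValueError("samples must have equal length")
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)
```

In a validation loop, a denoising method that works would drive this distance between the cleaned map and the noiseless simulation toward zero; the full map-level comparison in the paper is richer than this 1D reduction.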
Modern astronomy increasingly relies upon systematic surveys, whose dedicated telescopes continuously observe the sky across varied wavelength ranges of the electromagnetic spectrum; some surveys also observe non-electromagnetic messengers, such as high-energy particles or gravitational waves. Stars and galaxies look different through the eyes of different instruments, and their independent measurements have to be carefully combined to provide a complete, sound picture of the multicolor and eventful universe. The association of an object's independent detections is, however, a difficult problem scientifically, computationally, and statistically, raising varied challenges across diverse astronomical applications. The fundamental problem is finding records in survey databases with directions that match to within the direction uncertainties. Such astronomic