We present a novel population-based Bayesian inference approach to model the average and population variance of the spatial distribution of a set of observables from an ensemble analysis of low signal-to-noise ratio measurements. The method consists of (1) inferring the average profile using Gaussian processes and (2) computing the covariance of the profile observables given a set of independent variables. Our model is computationally efficient and capable of inferring average profiles of a large population from noisy measurements, without stacking or binning the data and without parameterizing the shape of the mean profile. We demonstrate the performance of our method using dark matter, gas and stellar profiles extracted from hydrodynamical cosmological simulations of galaxy formation. The Population Profile Estimator (PoPE) is publicly available in a GitHub repository. Our new method should be useful for measuring the spatial distribution and internal structure of a variety of astrophysical systems with large astronomical surveys.
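The PoPE code itself is public, but the flavour of step (1), inferring an average profile with a Gaussian process rather than from stacked and binned data, can be illustrated with a minimal sketch. The kernel choice, hyperparameters and mock data below are our own assumptions for illustration, not PoPE's implementation:

```python
import numpy as np

def rbf_kernel(x1, x2, amp=1.0, scale=0.5):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return amp**2 * np.exp(-0.5 * (d / scale) ** 2)

def gp_posterior_mean(x_obs, y_obs, x_test, noise_var=0.1):
    """Posterior mean of a zero-mean GP given noisy, unbinned observations."""
    K = rbf_kernel(x_obs, x_obs) + noise_var * np.eye(len(x_obs))
    K_s = rbf_kernel(x_test, x_obs)
    return K_s @ np.linalg.solve(K, y_obs)

rng = np.random.default_rng(0)
# Mock population: noisy samples of a declining "profile" y = exp(-x),
# observed at scattered radii rather than on a fixed binning
x_obs = rng.uniform(0.0, 3.0, size=200)
y_obs = np.exp(-x_obs) + rng.normal(0.0, 0.3, size=200)
x_test = np.linspace(0.1, 2.9, 20)
mean_profile = gp_posterior_mean(x_obs, y_obs, x_test)
```

The GP posterior mean recovers the declining profile without any parametric form for its shape, which is the point of the non-parametric approach.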
In "A Bayesian Approach to Locating the Red Giant Branch Tip Magnitude (Part I)", a new technique was introduced for obtaining distances using the TRGB standard candle. Here we describe a useful complement to that technique, with the potential to further reduce the uncertainty in our distance measurements, which incorporates a matched-filter weighting scheme into the model likelihood calculations. In this scheme, stars are weighted according to their probability of being true object members. We then re-test our modified algorithm using random-realization artificial data to verify the validity of the generated posterior probability distributions (PPDs), and proceed to apply the algorithm to the satellite system of M31, culminating in a 3D view of the system. In addition to the distributions thus obtained, we apply a satellite-specific prior on the satellite distances, based on the halo density profile, to weight the resulting distance posteriors. Thus, in a single publication and using a single method, we present comprehensive coverage of the distances to the companion galaxies of M31, encompassing the dwarf spheroidals Andromeda I-III, V, IX-XXVII and XXX, along with NGC 147, NGC 185, M33 and M31 itself. Of these, the distances to Andromeda XXIV-XXVII and Andromeda XXX have never before been derived using the TRGB. Object distances are determined from high-resolution tip-magnitude posterior distributions generated using the Markov Chain Monte Carlo (MCMC) technique, with associated sampling of these distributions to account for uncertainties in foreground extinction and in the absolute magnitude of the TRGB, as well as photometric errors. The distance PPDs obtained for each object, both with and without the aforementioned prior, are made available to the reader in tabular form...
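A toy, grid-based version of a weighted tip likelihood conveys the core idea: the luminosity function rises sharply faintward of the TRGB, and each star's contribution to the likelihood is scaled by a membership weight. The slope, contamination fraction, magnitude limits and mock numbers below are illustrative assumptions of ours, not the paper's values, and the paper uses MCMC rather than a grid:

```python
import numpy as np

def tip_loglike(mags, weights, m_tip, slope=0.3, f_bright=0.05,
                m_min=20.0, m_max=23.0):
    """Weighted log-likelihood of a broken luminosity function: a
    10**(slope*m) rise fainter than the tip, plus a small uniform
    'contaminant' component brighter than it."""
    norm_faint = 10**(slope * m_max) - 10**(slope * m_tip)
    p = np.where(
        mags >= m_tip,
        (1 - f_bright) * np.log(10) * slope * 10**(slope * mags) / norm_faint,
        f_bright / (m_tip - m_min),
    )
    return np.sum(weights * np.log(p))

rng = np.random.default_rng(1)
m_tip_true, slope = 21.5, 0.3
# Mock RGB stars: inverse-CDF sample of the faintward power law
u = rng.uniform(size=500)
faint = np.log10(u * (10**(slope * 23.0) - 10**(slope * m_tip_true))
                 + 10**(slope * m_tip_true)) / slope
bright = rng.uniform(20.0, m_tip_true, size=25)   # contaminants above the tip
mags = np.concatenate([faint, bright])
weights = np.ones_like(mags)   # matched-filter membership weights would go here

grid = np.arange(20.8, 22.5, 0.02)
best_tip = grid[np.argmax([tip_loglike(mags, weights, m) for m in grid])]
```

Downweighting likely non-members (weights below 1) suppresses their pull on the break location, which is the purpose of the matched-filter scheme.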
The diversity of structures in the Universe (from the smallest galaxies to the largest superclusters) has formed under the pull of gravity from the tiny primordial perturbations that we see imprinted in the cosmic microwave background. A quantitative description of this process would require describing the motion of zillions of dark matter particles. This impossible task is usually circumvented by coarse-graining the problem: one either considers the Newtonian dynamics of particles with macroscopically large masses, or approximates the dark matter distribution with a continuous density field. There is no closed system of equations for the evolution of the matter density field alone, and instead the field must still be discretized at each timestep. In this work we describe a method of solving the full 6-dimensional Vlasov-Poisson equation via a system of auxiliary Schroedinger-like equations. The complexity of the problem is shifted into the choice of the number and shape of the initial wavefunctions, which need only be specified at the beginning of the computation (we stress that these wavefunctions have nothing to do with the quantum nature of the actual dark matter particles). We discuss different prescriptions for generating the initial wavefunctions from the initial conditions and demonstrate the validity of the technique on two simple test cases. This new simulation algorithm can in principle be used on an arbitrary distribution function, enabling the simulation of warm and hot dark matter structure-formation scenarios.
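A minimal 1-D, periodic split-step integrator conveys the Schroedinger-Poisson idea: evolve a wavefunction whose squared modulus plays the role of the matter density, sourcing its own gravitational potential. Grid size, effective hbar, units and the single-wavefunction setup below are arbitrary choices for illustration; the paper's multi-wavefunction construction and initial-condition prescriptions are not reproduced:

```python
import numpy as np

def sp_step(psi, dx, dt, hbar_eff=0.1, g4pi=1.0):
    """One kick-drift-kick split step of the 1-D periodic
    Schrodinger-Poisson system (potential frozen across the step
    for simplicity)."""
    n = psi.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    # Gravity sourced by the overdensity on a periodic box
    rho = np.abs(psi)**2
    delta_k = np.fft.fft(rho - rho.mean())
    phi_k = np.zeros_like(delta_k)
    nz = k != 0.0
    phi_k[nz] = -g4pi * delta_k[nz] / k[nz]**2     # solves phi'' = 4*pi*G*delta
    phi = np.real(np.fft.ifft(phi_k))
    psi = psi * np.exp(-0.5j * dt * phi / hbar_eff)                 # half kick
    psi = np.fft.ifft(np.exp(-0.5j * hbar_eff * dt * k**2) * np.fft.fft(psi))  # drift
    return psi * np.exp(-0.5j * dt * phi / hbar_eff)                # half kick

n, dx, dt = 256, 1.0, 0.05
x = dx * np.arange(n)
psi = np.exp(-0.5 * ((x - 128.0) / 8.0)**2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)        # unit total 'mass'
mass0 = np.sum(np.abs(psi)**2) * dx
for _ in range(20):
    psi = sp_step(psi, dx, dt)
mass1 = np.sum(np.abs(psi)**2) * dx
```

Because every sub-step is unitary, the total mass is conserved to machine precision, one of the attractions of the wavefunction representation.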
We present a new method of analysing and quantifying velocity structure in star-forming regions, suitable for the rapidly increasing quantity and quality of stellar position-velocity data. The method can be applied to data in any number of dimensions, does not require the centre or characteristic size (e.g. radius) of the region to be determined, and can be applied to regions with any underlying density and velocity structure. We test the method on a variety of example datasets and show that it is robust to realistic observational uncertainties and selection effects. The method identifies velocity structures/scales in a region and allows a direct comparison to be made between regions.
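The general flavour of quantifying velocity structure as a function of spatial scale, without reference to a centre or radius, can be sketched with pairwise velocity differences binned by pair separation. This is a toy of our own construction, not the authors' estimator, and the mock "expanding region" data are invented for illustration:

```python
import numpy as np

def velocity_structure(pos, vel, bins):
    """Mean pairwise velocity difference |dv| as a function of pair
    separation |dr|, for positions and velocities of shape (N, ndim)."""
    i, j = np.triu_indices(len(pos), k=1)          # all unique pairs
    dr = np.linalg.norm(pos[i] - pos[j], axis=1)
    dv = np.linalg.norm(vel[i] - vel[j], axis=1)
    idx = np.digitize(dr, bins) - 1
    out = np.full(len(bins) - 1, np.nan)
    for b in range(len(bins) - 1):
        sel = idx == b
        if sel.any():
            out[b] = dv[sel].mean()
    return out

rng = np.random.default_rng(4)
# Mock expanding region: velocity grows with position, plus noise
pos = rng.uniform(0.0, 10.0, size=(300, 3))
vel = 0.5 * pos + rng.normal(0.0, 0.2, size=(300, 3))
bins = np.linspace(0.0, 14.0, 8)
vs = velocity_structure(pos, vel, bins)
```

For coherent expansion the statistic rises with separation; for random motions it would be flat, so comparing curves between regions directly contrasts their velocity structure.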
Dust emission is the main foreground for cosmic microwave background (CMB) polarization. Its statistical characterization must be derived from the analysis of observational data because the precision required for a reliable component separation is far greater than what is currently achievable with physical models of the turbulent magnetized interstellar medium. This letter takes a significant step toward this goal by proposing a method that retrieves non-Gaussian statistical characteristics of dust emission from noisy Planck polarization observations at 353 GHz. We devised a statistical denoising method based on wavelet phase harmonics (WPH) statistics, which characterize the coherent structures in non-Gaussian random fields and define a generative model of the data. The method was validated on mock data combining a dust map from a magnetohydrodynamic simulation and Planck noise maps. The denoised map reproduces the true power spectrum down to scales where the noise power is an order of magnitude larger than that of the signal. It remains highly correlated to the true emission and retrieves some of its non-Gaussian properties. Applied to Planck data, the method provides a new approach to building a generative model of dust polarization that will characterize the full complexity of the dust emission. We also release PyWPH, a public Python package, to perform GPU-accelerated WPH analyses on images.
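WPH statistics are far beyond a few lines (the released PyWPH package exists for that), but the basic logic of statistical denoising, using models of the signal and noise statistics rather than the pixels themselves, can be shown at its simplest with a Fourier-space Wiener filter on a mock map. Note this toy captures only Gaussian (power-spectrum) information, which is precisely what WPH statistics go beyond; the spectra and map size are arbitrary assumptions:

```python
import numpy as np

def wiener_denoise(noisy_map, signal_ps, noise_ps):
    """Fourier-space Wiener filter: weight each mode by S / (S + N)."""
    f = np.fft.fft2(noisy_map)
    return np.real(np.fft.ifft2(f * signal_ps / (signal_ps + noise_ps)))

rng = np.random.default_rng(2)
n = 128
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.hypot(kx, ky)
amp = 1.0 / (k + 0.02)                     # red, 'dust-like' spectrum
truth = np.real(np.fft.ifft2(np.fft.fft2(rng.normal(size=(n, n))) * amp))
noise = rng.normal(scale=1.0, size=(n, n))
noisy = truth + noise
denoised = wiener_denoise(noisy, signal_ps=amp**2, noise_ps=1.0)

mse_noisy = np.mean((noisy - truth)**2)
mse_denoised = np.mean((denoised - truth)**2)
```

The filter suppresses noise-dominated small scales while preserving the signal-dominated large scales; the WPH approach generalizes this by also constraining the coherent, non-Gaussian structures that a power spectrum cannot see.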
Obtaining accurately calibrated redshift distributions of photometric samples is one of the great challenges in photometric surveys like LSST, Euclid, HSC, KiDS, and DES. We combine the redshift information from the galaxy photometry with constraints from two-point functions, utilizing cross-correlations with spatially overlapping spectroscopic samples. Our likelihood framework is designed to integrate directly into a typical large-scale structure and weak lensing analysis based on two-point functions. We discuss efficient and accurate inference techniques that allow us to scale the method to the large samples of galaxies expected in LSST. We consider statistical challenges like the parametrization of redshift systematics, discuss and evaluate techniques to regularize the sample redshift distributions, and investigate techniques that can help detect and calibrate sources of systematic error using posterior predictive checks. We evaluate and forecast photometric redshift performance using data from the CosmoDC2 simulations, within which we mimic a DESI-like spectroscopic calibration sample for cross-correlations. Using a combination of spatial cross-correlations and photometry, we show that we can calibrate the mean of the sample redshift distribution to an accuracy of $0.002(1+z)$ or better, consistent with the LSST-Y1 science requirements for weak lensing and large-scale structure probes.
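The full likelihood framework is involved, but the core of clustering-based calibration, that the cross-correlation amplitude with a narrow spectroscopic slice traces n(z) up to galaxy bias, can be shown with a simplified version of the standard clustering-redshift estimator, n(z) proportional to w_x(z)/sqrt(w_ss(z)). The mock amplitudes, biases and noise level below are invented for illustration:

```python
import numpy as np

def clustering_nz(w_x, w_ss):
    """Toy clustering-redshift estimator: n(z) ~ w_x(z) / sqrt(w_ss(z)),
    clipped to be non-negative and normalized to unit sum."""
    nz = np.clip(w_x / np.sqrt(w_ss), 0.0, None)
    return nz / nz.sum()

rng = np.random.default_rng(3)
z = np.linspace(0.2, 1.4, 25)
n_true = np.exp(-0.5 * ((z - 0.7) / 0.15)**2)
n_true /= n_true.sum()

b_p, b_s = 1.3, 1.6                    # photometric / spectroscopic biases
w_ss = np.full_like(z, b_s**2)         # spec-sample autocorrelation amplitude
w_x = b_p * b_s * n_true + rng.normal(0.0, 2e-4, size=z.size)  # cross-correlations
n_est = clustering_nz(w_x, w_ss)

mean_true = np.sum(z * n_true)
mean_est = np.sum(z * n_est)
```

Dividing by sqrt(w_ss) removes the spectroscopic bias, so a constant photometric bias cancels under normalization and the shape and mean of n(z) are recovered; the residual mean-redshift error is the quantity the science requirements constrain.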