We perform an analysis of the three-dimensional cosmic matter density field traced by galaxies of the SDSS-III/BOSS galaxy sample. That this analysis is free of significant systematics is supported by two elements: the successful cross-correlation with the gravitational lensing observations derived from Planck 2018 data, and the absence of bias at scales $k \simeq 10^{-3}$--$10^{-2}\,h\,\mathrm{Mpc}^{-1}$ in the a posteriori power spectrum of recovered initial conditions. Our analysis builds upon our algorithm for Bayesian Origin Reconstruction from Galaxies (BORG) and uses a physical model of cosmic structure formation to infer physically meaningful cosmic structures and their corresponding dynamics from deep galaxy observations. Our approach accounts for redshift-space distortions and light-cone effects inherent to deep observations. We also apply detailed corrections to account for known and unknown foreground contaminations, selection effects and galaxy biases. We obtain maps of residual, as yet unexplained, systematic effects in the spectroscopic data of SDSS-III/BOSS. Our results show that unbiased and physically plausible models of the cosmic large-scale structure can be obtained from present and next-generation galaxy surveys.
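As an illustration of the large-scale bias check mentioned above, the sketch below estimates a binned power spectrum of a recovered initial-condition field on a periodic grid and forms its ratio to the fiducial spectrum. The FFT-based estimator, grid and box size are generic illustrative choices, not the BORG implementation.

```python
import numpy as np

def power_spectrum(delta, boxsize, n_bins=20):
    """Estimate the isotropic power spectrum P(k) of a 3D density field
    on a periodic grid of physical side length `boxsize` (Mpc/h)."""
    n = delta.shape[0]
    d_k = np.fft.rfftn(delta) * (boxsize / n) ** 3           # FFT with physical normalisation
    k1d = np.fft.fftfreq(n, d=boxsize / n) * 2 * np.pi       # wavenumbers in h/Mpc
    kz1d = np.fft.rfftfreq(n, d=boxsize / n) * 2 * np.pi
    kx, ky, kz = np.meshgrid(k1d, k1d, kz1d, indexing="ij")
    k = np.sqrt(kx**2 + ky**2 + kz**2)
    power = np.abs(d_k) ** 2 / boxsize**3                    # |delta_k|^2 / V
    bins = np.logspace(np.log10(2 * np.pi / boxsize), np.log10(k.max()), n_bins + 1)
    idx = np.digitize(k.ravel(), bins)
    k_mean, pk = [], []
    for i in range(1, n_bins + 1):                           # skip empty shells
        sel = idx == i
        if sel.any():
            k_mean.append(k.ravel()[sel].mean())
            pk.append(power.ravel()[sel].mean())
    return np.array(k_mean), np.array(pk)

# Bias check: the ratio of the recovered initial-condition spectrum to the
# fiducial one should scatter around unity at k ~ 1e-3 - 1e-2 h/Mpc.
# k_fid, pk_fid = power_spectrum(delta_ic_fiducial, boxsize=4000.0)
# k_rec, pk_rec = power_spectrum(delta_ic_recovered, boxsize=4000.0)
# bias_ratio = pk_rec / pk_fid
```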
We present a self-consistent Bayesian formalism to sample the primordial density fields compatible with a set of dark matter density tracers observed in redshift space after cosmic evolution. Previous works on density reconstruction either did not self-consistently consider redshift-space distortions or included an additional iterative distortion-correction step. We present here the analytic solution for coherent flows within a Hamiltonian Monte Carlo posterior sampling of the primordial density field. We test our method within the Zel'dovich approximation, and also present an analytic solution including tidal fields and spherical collapse on small scales using augmented Lagrangian perturbation theory. The resulting reconstructed fields are isotropic and their power spectra are unbiased with respect to the true one defined by our mock observations. We introduce novel algorithmic implementations of the mass-assignment kernels used to define the dark matter density field and of the optimization of the time step in the Hamiltonian equations of motion. Our algorithm, dubbed barcode, promises to be especially suited for analyses of the dark matter cosmic web down to scales of a few megaparsecs. This large-scale structure is implied by the observed spatial distribution of galaxy clusters, such as those obtained from X-ray, SZ or weak-lensing surveys, as well as by that of the intergalactic medium sampled by the Lyman-alpha forest or perhaps even by deep hydrogen intensity mapping. In these cases, virialized motions are negligible and the tracers cannot be modeled as point-like objects. The method could be used in all of these contexts as a baryon acoustic oscillation reconstruction algorithm.
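For context on how coherent flows enter such a forward model, the following sketch applies a generic Zel'dovich displacement plus a plane-parallel redshift-space shift to a gridded initial density field. The growth factor, growth rate and line-of-sight convention are illustrative assumptions; this is not the barcode implementation.

```python
import numpy as np

def zeldovich_rsd_positions(delta_ic, boxsize, D1=1.0, f=0.5, los=2):
    """Zel'dovich forward model with plane-parallel redshift-space distortions.

    delta_ic : (n, n, n) linear initial density contrast
    boxsize  : box side length in Mpc/h
    D1, f    : linear growth factor and growth rate at the target redshift
    los      : axis index of the line of sight
    """
    n = delta_ic.shape[0]
    k1d = np.fft.fftfreq(n, d=boxsize / n) * 2 * np.pi
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                                    # avoid division by zero for the mean mode
    delta_k = np.fft.fftn(delta_ic)
    # Displacement field psi = -grad(phi), nabla^2 phi = delta  =>  psi_k = i k / k^2 delta_k
    psi = [np.fft.ifftn(1j * ki / k2 * delta_k).real for ki in (kx, ky, kz)]
    # Lagrangian grid positions
    q = np.indices((n, n, n)) * (boxsize / n)
    # Eulerian real-space positions, plus the linear f*psi shift along the line of sight
    x = [q[i] + D1 * psi[i] for i in range(3)]
    x[los] += D1 * f * psi[los]
    return [np.mod(xi, boxsize) for xi in x]             # periodic wrapping
```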
We present a novel halo painting network that learns to map approximate 3D dark matter fields to realistic halo distributions. This map is provided via a physically motivated network with which we can learn the non-trivial local relation between the dark matter density field and halo distributions without relying on a physical model. Unlike other generative or regressive models, a well-motivated prior and simple physical principles allow us to train the mapping network quickly and with relatively little data. In learning to paint halo distributions from computationally cheap, analytical and non-linear density fields, we bypass the need for full particle-mesh simulations and halo-finding algorithms. Furthermore, by design, our halo painting network needs only local patches of the dark matter density to predict the halos, and as such it can predict the 3D halo distribution for any arbitrary simulation box size. Our neural network can be trained using small simulations and then used to predict large halo distributions, as long as the resolutions are equivalent. We evaluate our model's ability to generate 3D halo count distributions that reproduce, to a high degree, summary statistics of the input or reference realizations, such as the power spectrum and bispectrum.
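A schematic of the kind of purely convolutional, local mapping described above is sketched below in PyTorch. The layer widths, kernel sizes and Softplus output link are illustrative assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class HaloPainter(nn.Module):
    """Fully convolutional 3D network: because it uses only local convolutions,
    it can be trained on small boxes and applied to arbitrarily large density
    grids of the same resolution."""
    def __init__(self, channels=16, kernel=3):
        super().__init__()
        pad = kernel // 2
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel, padding=pad),
            nn.ReLU(),
            nn.Conv3d(channels, channels, kernel, padding=pad),
            nn.ReLU(),
            nn.Conv3d(channels, 1, kernel, padding=pad),
            nn.Softplus(),          # halo counts are non-negative
        )

    def forward(self, density):
        # density: (batch, 1, nx, ny, nz) approximate dark matter density field
        return self.net(density)

# Usage sketch: train on small patches, predict on a full box of the same resolution.
# model = HaloPainter()
# halo_counts = model(torch.randn(1, 1, 128, 128, 128))
```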
Anisotropies of the cosmic microwave background (CMB) have proven to be a very powerful tool to constrain dark matter annihilation at the epoch of recombination. However, CMB constraints are currently derived using a number of reasonable but as yet untested assumptions that could potentially lead to a misestimation of the true bounds. In this paper we examine the potential impact of these systematic effects. In particular, we separately study the propagation of the secondary particles produced by annihilation in two energy regimes: first following the shower from the initial particle energy down to the keV scale, and then tracking the resulting secondary particles from this scale to the absorption of their energy as heat, ionization or excitation of the medium. We improve both the high- and low-energy parts of the calculation, in particular finding that our more accurate treatment of losses to sub-10.2 eV photons produced by scattering of high-energy electrons weakens the constraints on particular DM annihilation models by up to a factor of two. On the other hand, we find that the uncertainties we examine in the low-energy propagation do not significantly affect the results for current and upcoming CMB data. We include the evaluation of the precise amount of excitation energy, in the form of Lyman-alpha photons, produced by the propagation of the shower, and examine the effects of varying the helium fraction and helium ionization fraction. In the recent literature, simple approximations for the fraction of energy absorbed in different channels have often been used to derive CMB constraints: we assess the impact of using accurate versus approximate energy fractions. Finally, we check that the choice of recombination code (between RECFAST v1.5 and COSMOREC) used to calculate the evolution of the free electron fraction in the presence of dark matter annihilation introduces negligible differences.
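For orientation, the sketch below evaluates the smooth-background energy injection rate from s-wave annihilation and multiplies it by an externally supplied deposited-energy fraction per channel. The constants, parameter values and function names are illustrative stand-ins for the detailed shower calculation described above.

```python
import numpy as np

# Physical constants (SI)
G = 6.674e-11                 # m^3 kg^-1 s^-2
C = 2.998e8                   # m s^-1
H0 = 67.7e3 / 3.086e22        # Hubble constant in s^-1
OMEGA_DM = 0.26
RHO_CRIT = 3 * H0**2 / (8 * np.pi * G)   # critical density, kg m^-3

def injected_power(z, sigma_v=3e-26, m_chi_gev=100.0):
    """Smooth-background energy injection rate from s-wave DM annihilation,
    dE/(dV dt) in J m^-3 s^-1.  sigma_v in cm^3 s^-1, m_chi in GeV."""
    rho_dm = OMEGA_DM * RHO_CRIT * (1 + z) ** 3          # kg m^-3
    m_chi_kg = m_chi_gev * 1.783e-27                      # GeV -> kg
    sigma_v_si = sigma_v * 1e-6                           # cm^3/s -> m^3/s
    return rho_dm**2 * sigma_v_si * C**2 / m_chi_kg

def deposited_power(z, f_channel):
    """Energy actually deposited in a given channel (heat, ionization,
    Lyman-alpha excitation), given an externally computed fraction f_c(z)."""
    return f_channel(z) * injected_power(z)

# Example: a crude constant efficiency in place of a full shower calculation.
# P_heat = deposited_power(600.0, lambda z: 0.3)
```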
We address the problem of inferring the three-dimensional matter distribution from a sparse set of one-dimensional quasar absorption spectra of the Lyman-$\alpha$ forest. Using a Bayesian forward-modelling approach, we focus on extending the dynamical model to a fully self-consistent, hierarchical, field-level prediction of redshift-space quasar absorption sightlines. Our field-level approach rests on a recently developed semiclassical analogue to Lagrangian perturbation theory (LPT), which alleviates the noise problems and interpolation requirements of LPT. It furthermore allows for a manifestly conservative mapping of the optical depth to redshift space. In addition, this new dynamical model naturally introduces a coarse-graining scale, which we exploit to accelerate the Markov chain Monte Carlo (MCMC) sampler using simulated annealing. By gradually reducing the effective temperature of the forward model, we allow the sampler to first converge on large spatial scales before it becomes sensitive to the increasingly large space of smaller scales. We demonstrate the advantages, in terms of speed and noise properties, of this field-level approach over using LPT as a forward model, and, using mock data, we validate its ability to reconstruct the three-dimensional primordial perturbations and matter distribution from sparse quasar sightlines.
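To illustrate the annealing idea in isolation, the following sketch tempers a Gaussian likelihood inside a simple Metropolis sampler and cools the temperature geometrically towards one. The likelihood, cooling schedule and proposal are generic stand-ins for the actual field-level forward model and MCMC sampler.

```python
import numpy as np

def log_likelihood(params, data, forward_model, noise_sigma):
    """Gaussian data likelihood for a generic forward model (a stand-in for the
    field-level prediction of quasar absorption sightlines)."""
    residual = data - forward_model(params)
    return -0.5 * np.sum((residual / noise_sigma) ** 2)

def annealed_metropolis(data, forward_model, x0, noise_sigma=0.1,
                        n_steps=5000, t_initial=50.0, cooling=0.999,
                        step_size=0.05, seed=0):
    """Metropolis sampler with a tempered likelihood L^(1/T): large T lets the
    large-scale modes converge first, and T -> 1 restores the target posterior."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    log_l = log_likelihood(x, data, forward_model, noise_sigma)
    temperature = t_initial
    for _ in range(n_steps):
        proposal = x + step_size * rng.standard_normal(x.shape)
        log_l_new = log_likelihood(proposal, data, forward_model, noise_sigma)
        if np.log(rng.random()) < (log_l_new - log_l) / temperature:
            x, log_l = proposal, log_l_new
        temperature = max(1.0, temperature * cooling)   # cool towards T = 1
    return x

# Toy usage: identity forward model, i.e. directly fitting the data vector.
# x_fit = annealed_metropolis(data, lambda p: p, x0=np.zeros_like(data))
```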
We present an analysis of the main systematic effects that could impact the measurement of CMB polarization with the proposed CORE space mission. We employ timeline-to-map simulations to verify that the CORE instrumental set-up and scanning strategy allow us to measure sky polarization to a level of accuracy adequate for the mission's science goals. We also show how the CORE observations can be processed to mitigate the level of contamination by potentially worrying systematics, including intensity-to-polarization leakage due to bandpass mismatch, asymmetric main beams, pointing errors and correlated noise. We use analysis techniques that are well validated on data from current missions such as Planck to demonstrate how the residual contamination of the measurements by these effects can be brought to a level low enough neither to hamper the scientific capability of the mission nor to significantly increase the overall error budget. We also present a prototype of the CORE photometric calibration pipeline, based on that used for Planck, and discuss its robustness to systematics, showing how CORE can achieve its calibration requirements. While a fine-grained assessment of the impact of systematics requires a level of knowledge of the system that can only be achieved in a future study phase, the analysis presented here strongly suggests that the main areas of concern for the CORE mission can be addressed using existing knowledge, techniques and algorithms.
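As a minimal illustration of the timeline-to-map step, the sketch below performs naive binned map-making, solving a 3x3 system per pixel for the Stokes parameters I, Q and U from a simulated timeline. The data model and the conditioning cut are simplifying assumptions; this is not the CORE or Planck pipeline.

```python
import numpy as np

def bin_map_iqu(tod, pixels, psi, n_pix):
    """Naive binned map-making: solve, pixel by pixel, the 3x3 system
    (A^T A) m = A^T d for (I, Q, U), where each timeline sample is modelled
    as d = I + Q cos(2 psi) + U sin(2 psi)."""
    c, s = np.cos(2 * psi), np.sin(2 * psi)
    weights = (np.ones_like(c), c, s)
    # Accumulate A^T d and A^T A per pixel
    atd = np.zeros((n_pix, 3))
    ata = np.zeros((n_pix, 3, 3))
    for i, w in enumerate(weights):
        np.add.at(atd[:, i], pixels, w * tod)
        for j, w2 in enumerate(weights):
            np.add.at(ata[:, i, j], pixels, w * w2)
    # Solve per pixel; pixels with poor angle coverage or no hits are masked
    maps = np.full((n_pix, 3), np.nan)
    good = np.abs(np.linalg.det(ata)) > 1e-8
    maps[good] = np.linalg.solve(ata[good], atd[good][..., None])[..., 0]
    return maps   # columns: I, Q, U
```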