To study the impact of sparsity and galaxy bias on void statistics, we use a single large-volume, high-resolution N-body simulation to compare voids in multiple levels of subsampled dark matter, halo populations, and mock galaxies from a Halo Occupation Distribution model tuned to different galaxy survey densities. We focus our comparison on three key observational statistics: number functions, ellipticity distributions, and radial density profiles. We use the hierarchical tree structure of voids to interpret the impacts of sampling density and galaxy bias, and theoretical and empirical functions to describe the statistics in all our sample populations. We are able to make simple adjustments to theoretical expectations to offer prescriptions for translating from analytics to the void properties measured in realistic observations. We find that sampling density has a much larger effect on void sizes than galaxy bias. At lower tracer density, small voids disappear and the remaining voids are larger, more spherical, and have slightly steeper profiles. When a proper lower mass threshold is chosen, voids in halo distributions largely mimic those found in galaxy populations, except for ellipticities, where galaxy bias leads to higher values. We use the void density profile of Hamaus et al. (2014) to show that voids follow a self-similar and universal trend, allowing simple translations between voids studied in dark matter and voids identified in galaxy surveys. We have added the mock void catalogs used in this work to the Public Cosmic Void Catalog at http://www.cosmicvoids.net.
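The Hamaus et al. (2014) void density profile invoked above has a simple closed form, $\delta(r) = \delta_c\,(1-(r/r_s)^\alpha)/(1+(r/r_v)^\beta)$, with a central underdensity $\delta_c$, a scale radius $r_s$, and an overdense compensation wall at $r \gtrsim r_v$. The sketch below evaluates this form; the parameter values are illustrative placeholders, not fits from the paper.

```python
import numpy as np

def hsw_profile(r, r_v, delta_c=-0.8, r_s_over_rv=0.9, alpha=2.0, beta=8.0):
    """Hamaus-Sutter-Wandelt (2014) void density profile.

    Returns delta(r) = rho(r)/rho_mean - 1 at radius r for a void of
    radius r_v. Default parameter values are illustrative placeholders,
    not the best-fit values from the paper.
    """
    r_s = r_s_over_rv * r_v  # scale radius where delta crosses zero
    r = np.asarray(r, dtype=float)
    return delta_c * (1.0 - (r / r_s) ** alpha) / (1.0 + (r / r_v) ** beta)
```

By construction the profile equals $\delta_c$ at the void centre, crosses zero at $r = r_s$, turns positive in the compensation wall, and decays to zero at large radii.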
Survey observations of the three-dimensional locations of galaxies are a powerful approach to measuring the distribution of matter in the universe, which can be used to learn about the nature of dark energy, the physics of inflation, neutrino masses, etc. A competitive survey, however, requires covering a large volume (e.g., $V_{\rm survey} \sim 10~{\rm Gpc}^3$), and thus tends to be expensive. A sparse sampling method offers a more affordable solution to this problem: within a survey footprint covering a given survey volume, $V_{\rm survey}$, we observe only a fraction of the volume. The distribution of observed regions should be chosen such that their separation is smaller than the length scale corresponding to the wavenumber of interest. One can then recover the power spectrum of galaxies with the precision expected for a survey covering a volume of $V_{\rm survey}$ (rather than the sum of the observed volumes), with the number density of galaxies given by the total number of observed galaxies divided by $V_{\rm survey}$ (rather than the number density of galaxies within an observed region). We find that regularly-spaced sampling yields an unbiased power spectrum with no window function effect, whereas deviations from regularly-spaced sampling, which are unavoidable in realistic surveys, introduce calculable window function effects and increase the uncertainties of the recovered power spectrum. While we discuss the sparse sampling method within the context of the forthcoming Hobby-Eberly Telescope Dark Energy Experiment, the method is general and can be applied to other galaxy surveys.
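The density scaling described above — shot noise set by the total observed count divided by the full survey volume — can be illustrated in a few lines. The function name and arguments below are hypothetical, for illustration only, not the paper's estimator.

```python
def effective_shot_noise(n_gal_in_region, f_sample):
    """Shot-noise power 1/n_eff for a sparsely sampled survey.

    n_gal_in_region : number density of galaxies inside the observed regions
    f_sample        : fraction of the full survey volume actually observed

    The effective density entering the recovered power spectrum is the
    total observed galaxy count divided by the full survey volume, i.e.
    n_eff = f_sample * n_gal_in_region, so sparse sampling raises the
    shot-noise floor by 1/f_sample relative to a contiguous survey.
    """
    n_eff = f_sample * n_gal_in_region
    return 1.0 / n_eff
```

For example, observing a quarter of the footprint with an in-region density of $10^{-3}\,h^3\,{\rm Mpc}^{-3}$ quadruples the shot-noise power relative to full coverage.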
We study the relationship between dark-matter haloes and matter in the MIP $N$-body simulation ensemble, which allows precision measurements of this relationship, even deep into voids. What enables this is a lack of discreteness, stochasticity, and exclusion, achieved by averaging over hundreds of possible sets of initial small-scale modes, while holding fixed the large-scale modes that give the cosmic web. We find (i) that dark-matter-halo formation is greatly suppressed in voids; there is an exponential downturn at low densities in the otherwise power-law matter-to-halo density bias function. Thus, the rarity of haloes in voids is akin to the rarity of the largest clusters, and their abundance is quite sensitive to cosmological parameters. The exponential downturn appears both in an excursion-set model, and in a model in which fluctuations evolve in voids as in an open universe with an effective $\Omega_m$ proportional to a large-scale density. We also find that (ii) haloes typically populate the average halo-density field in a super-Poisson way, i.e. with a variance exceeding the mean; and (iii) the rank-order-Gaussianized halo and dark-matter fields are impressively similar in Fourier space. We compare both their power spectra and cross-correlation, supporting the conclusion that one is roughly a strictly-increasing mapping of the other. The MIP ensemble especially reveals how halo abundance varies with environmental quantities beyond the local matter density; (iv) we find a visual suggestion that at fixed matter density, filaments are more populated by haloes than clusters.
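The rank-order Gaussianization mentioned in point (iii) replaces each field value by the standard-normal quantile of its rank, preserving the ordering of the original values. A minimal sketch of the transformation (not the authors' actual pipeline) could look like:

```python
import numpy as np
from statistics import NormalDist

def gaussianize(field):
    """Rank-order Gaussianize a field.

    Each value is mapped to the standard-normal quantile of its rank,
    so the output has a Gaussian one-point distribution while the
    ordering of values (and hence the field's morphology) is preserved.
    A minimal sketch; ties and normalization conventions vary.
    """
    flat = np.asarray(field, dtype=float).ravel()
    ranks = np.argsort(np.argsort(flat))          # ranks 0 .. N-1
    probs = (ranks + 0.5) / flat.size             # probabilities in (0, 1)
    nd = NormalDist()
    gauss = np.array([nd.inv_cdf(p) for p in probs])
    return gauss.reshape(np.shape(field))
```

Because the map is strictly increasing, comparing Gaussianized halo and matter fields tests whether one field is a monotonic transformation of the other, as the abstract concludes.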
Cosmic voids offer an extraordinary opportunity to study the effects of massive neutrinos on cosmological scales. Because they are freely streaming, neutrinos can penetrate the interior of voids more easily than cold dark matter or baryons, which makes their relative contribution to the mass budget in voids much higher than elsewhere in the Universe. Simulations have recently shown how various characteristics of voids in the matter distribution are affected by neutrinos, such as their abundance, density profiles, dynamics, and clustering properties. However, the tracers used to identify voids in observations (e.g., galaxies or halos) are affected by neutrinos as well, and isolating the unique neutrino signatures inherent to voids becomes more difficult. In this paper we make use of the DEMNUni suite of simulations to investigate the clustering bias of voids in Fourier space as a function of their core density and compensation. We find a clear dependence on the sum of neutrino masses that remains significant even for void statistics extracted from halos. In particular, we observe that the amplitude of the linear void bias increases with neutrino mass for voids defined in dark matter, whereas this trend is reversed and slightly attenuated when measuring the relative void-halo bias using voids identified in the halo distribution. Finally, we argue that the original behaviour can be restored when considering observations of the total matter distribution (e.g., via weak lensing), and comment on scale-dependent effects in the void bias that may provide additional information on neutrinos in the future.
We compute the galaxy-galaxy correlation function of low-luminosity SDSS-DR7 galaxies $(-20 < M_{\rm r} - 5\log_{10}(h) < -18)$ inside cosmic voids identified in a volume-limited sample of galaxies at $z=0.085$. To identify voids, we use bright galaxies with $M_{\rm r} - 5\log_{10}(h) < -20.0$. We find that structure in voids, as traced by faint galaxies, is mildly non-linear compared with the general population of galaxies of similar luminosities. This implies a redshift-space correlation function with a shape similar to that of the real-space correlation function, up to a normalization factor. The redshift-space distortions of void galaxies allow us to compute pairwise velocity distributions, which are consistent with an exponential model with a pairwise velocity dispersion of $w \sim 50$-$70$ km/s, significantly lower than the global value of $w \sim 500$ km/s. We also find that the internal structure of voids as traced by faint galaxies is independent of void environment: the correlation functions of galaxies residing in void-in-void or void-in-shell regions are identical within uncertainties. We have tested all our results against the semi-analytic catalogue MDPL2-\textsc{Sag}, finding good agreement with the observations in all the statistics studied.
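The exponential pairwise-velocity model quoted above is a Laplace distribution normalized so that its dispersion equals the quoted velocity dispersion. A minimal sketch (the function name and the symbol `sigma` for the dispersion are ours, for illustration):

```python
import math

def exp_pairwise_velocity_pdf(w, sigma):
    """Exponential (Laplace) model for the pairwise line-of-sight
    velocity distribution:

        f(w) = exp(-sqrt(2) |w| / sigma) / (sqrt(2) sigma),

    normalised so that the integral of f is 1 and the velocity
    dispersion <w^2>^(1/2) equals sigma.
    """
    return math.exp(-math.sqrt(2.0) * abs(w) / sigma) / (math.sqrt(2.0) * sigma)
```

With $\sigma \sim 50$-$70$ km/s for void galaxies, this distribution is roughly an order of magnitude narrower than the global one with $\sigma \sim 500$ km/s, which is what makes the quoted measurement distinctive.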
We present a novel method to significantly speed up cosmological parameter sampling. The method relies on constructing an interpolation of the CMB log-likelihood based on sparse grids, which is used as a shortcut for the likelihood evaluation. We obtain excellent results over a large region in parameter space, comprising about 25 log-likelihoods around the peak, and we reproduce the one-dimensional projections of the likelihood almost perfectly. In speed and accuracy, our technique is competitive with existing approaches that accelerate parameter estimation based on polynomial interpolation or neural networks, while having some advantages over them. In our method, there is no danger of creating unphysical wiggles, as can be the case for polynomial fits of high degree. Furthermore, we do not require the long training times of neural networks: the construction of the interpolation is determined by the time it takes to evaluate the likelihood at the sampling points, which can be parallelised to an arbitrary degree. Our approach is completely general, and it can adaptively exploit the properties of the underlying function. We can thus apply it to any problem where an accurate interpolation of a function is needed.
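Sparse-grid interpolation is built from hierarchical hat-function bases, where each new level only adds the "surplus" between the target function and the coarser interpolant. The one-dimensional building block can be sketched as follows (a textbook illustration under standard assumptions on $[0,1]$, not the paper's CMB-likelihood code):

```python
def hat(x, center, h):
    """Standard hat basis function of half-width h centred at `center`."""
    return max(0.0, 1.0 - abs(x - center) / h)

def hierarchical_interpolate(f, x, max_level):
    """Interpolate f on [0, 1] with hierarchical hat functions.

    At each level the new (odd-indexed) grid points store the
    hierarchical surplus: the difference between f and the interpolant
    built from all coarser levels. This is the 1D building block that
    sparse grids combine across dimensions.
    """
    surpluses = []  # list of (center, half-width, surplus) triples

    def current(x):
        return sum(s * hat(x, c, h) for c, h, s in surpluses)

    for level in range(1, max_level + 1):
        h = 2.0 ** (-level)
        # odd grid points at this level are the new hierarchical nodes
        for i in range(1, 2 ** level, 2):
            c = i * h
            surpluses.append((c, h, f(c) - current(c)))
    return current(x)
```

For smooth functions the surpluses decay rapidly with level, which is what lets sparse grids discard most of the full tensor-product grid in higher dimensions while retaining accuracy.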