We present a deep machine learning (ML) technique for accurately determining $\sigma_8$ and $\Omega_m$ from mock 3D galaxy surveys. The mock surveys are built from the AbacusCosmos suite of $N$-body simulations, which comprises 40 cosmological volume simulations spanning a range of cosmological models, and we account for uncertainties in galaxy formation scenarios through the use of generalized halo occupation distributions (HODs). We explore a trio of ML models: a 3D convolutional neural network (CNN), a power-spectrum-based fully connected network, and a hybrid approach that merges the two to combine physically motivated summary statistics with flexible CNNs. We describe best practices for training a deep model on a suite of matched-phase simulations, and we test our model on a completely independent sample that uses previously unseen initial conditions, cosmological parameters, and HOD parameters. Although the mock observations are quite small ($\sim 0.07\,h^{-3}\,\mathrm{Gpc}^3$) and the training data span a large parameter space (6 cosmological and 6 HOD parameters), the CNN and hybrid CNN can constrain $\sigma_8$ and $\Omega_m$ to $\sim 3\%$ and $\sim 4\%$, respectively.
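The power-spectrum branch of such a hybrid model consumes an isotropically binned $P(k)$ summary of the 3D density field. A minimal numpy sketch of that summary statistic (the grid size, bin count, and estimator normalization are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np

def isotropic_power_spectrum(delta, box_size, n_bins=16):
    """Bin |delta_k|^2 into isotropic k-bins.

    delta    : 3D overdensity field on a regular grid
    box_size : side length of the periodic box (e.g. Mpc/h)
    Returns (k_centers, P_k), the vector fed to the fully connected branch.
    """
    n = delta.shape[0]
    delta_k = np.fft.rfftn(delta)
    # standard estimator normalization: P = (V / N^2) |delta_k|^2
    power = np.abs(delta_k) ** 2 * (box_size / n) ** 3 / n ** 3
    # physical wavenumber magnitude on the rfft grid
    kf = 2 * np.pi / box_size  # fundamental mode
    kx = np.fft.fftfreq(n, d=1.0 / n) * kf
    kz = np.fft.rfftfreq(n, d=1.0 / n) * kf
    kmag = np.sqrt(kx[:, None, None] ** 2
                   + kx[None, :, None] ** 2
                   + kz[None, None, :] ** 2)
    bins = np.linspace(kf, kmag.max(), n_bins + 1)
    idx = np.digitize(kmag.ravel(), bins)
    pk = np.array([power.ravel()[idx == i].mean() if np.any(idx == i) else 0.0
                   for i in range(1, n_bins + 1)])
    k_centers = 0.5 * (bins[1:] + bins[:-1])
    return k_centers, pk
```

For a white-noise field this returns an approximately flat spectrum; in the hybrid architecture the resulting vector would be concatenated with the CNN feature vector before the final dense layers.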
We present a large-scale Bayesian inference framework to constrain cosmological parameters using galaxy redshift surveys, via an application of the Alcock-Paczynski (AP) test. Our physical model of the non-linearly evolved density field, as probed by galaxy surveys, employs Lagrangian perturbation theory (LPT) to connect Gaussian initial conditions to the final density field, followed by a coordinate transformation to obtain the redshift space representation for comparison with data. We generate realizations of primordial and present-day matter fluctuations given a set of observations. This hierarchical approach encodes a novel AP test, extracting several orders of magnitude more information from the cosmological expansion than classical approaches, to infer cosmological parameters and jointly reconstruct the underlying 3D dark matter density field. The novelty of this AP test lies in constraining the comoving-redshift transformation to infer the cosmology which yields isotropic correlations of the galaxy density field, with the underlying assumption resting purely on the cosmological principle. Such an AP test does not rely explicitly on modelling the full statistics of the field, and we verify in depth via simulations that this renders our test robust to model misspecification. This leads to another crucial advantage, namely that the cosmological parameters exhibit extremely weak dependence on the currently unresolved phenomenon of galaxy bias, thereby circumventing a potentially key limitation. This is consequently among the first methods to extract a large fraction of information from statistics other than direct density-contrast correlations, without being sensitive to the amplitude of density fluctuations. We perform several statistical efficiency and consistency tests on a mock galaxy catalogue, using the SDSS-III survey as a template.
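The coordinate transformation from the LPT-evolved field to redshift space is, in the plane-parallel approximation, $s = r + v_{\rm los}/(aH)$. A minimal sketch (the units, line-of-sight axis, and periodic wrap are assumptions of this illustration, not details taken from the paper):

```python
import numpy as np

def to_redshift_space(pos, vel, a, H, box_size, los=2):
    """Map comoving positions to redshift space along one axis
    (plane-parallel approximation): s = r + v_los / (a * H).

    pos : (N, 3) comoving positions [Mpc/h]
    vel : (N, 3) peculiar velocities [km/s]
    a   : scale factor; H : Hubble rate at a [km/s per Mpc/h]
    """
    s = pos.copy()                      # leave the input untouched
    s[:, los] += vel[:, los] / (a * H)  # line-of-sight displacement
    s[:, los] %= box_size               # periodic wrap
    return s
```

A tracer at $z$-position 10 Mpc/$h$ with a 100 km/s line-of-sight velocity, at $a=1$ and $H=100$ km/s per Mpc/$h$, is displaced by exactly 1 Mpc/$h$.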
We present a re-analysis of cosmic shear and galaxy clustering from first-year Dark Energy Survey data (DES Y1), making use of a Hybrid Effective Field Theory (HEFT) approach to model the galaxy-matter relation on weakly non-linear scales, initially proposed in Modi et al. (2020) (arXiv:1910.07097). This allows us to explore the enhancement in cosmological constraining power enabled by extending the galaxy clustering scale range typically used in projected large-scale structure analyses. Our analysis is based on a recomputed harmonic-space data vector and covariance matrix, carefully accounting for all sources of mode-coupling, non-Gaussianity and shot noise, which allows us to provide robust goodness-of-fit measures. We use the \textsc{AbacusSummit} suite of simulations to build an emulator for the HEFT model predictions. We find that this model can explain the galaxy clustering and shear data up to wavenumbers $k_{\rm max}\sim 0.6\,{\rm Mpc}^{-1}$. We constrain $(S_8,\Omega_m) = (0.786\pm 0.020, 0.273^{+0.030}_{-0.036})$ at the fiducial $k_{\rm max}\sim 0.3\,{\rm Mpc}^{-1}$, improving to $(S_8,\Omega_m) = (0.786^{+0.015}_{-0.018}, 0.266^{+0.024}_{-0.027})$ at $k_{\rm max}\sim 0.5\,{\rm Mpc}^{-1}$. This represents a $\sim10\%$ and $\sim35\%$ improvement, respectively, on the constraints on these two parameters derived using a linear bias relation on a reduced scale range ($k_{\rm max}\lesssim 0.15\,{\rm Mpc}^{-1}$), in spite of the 15 additional parameters involved in the HEFT model. We investigate whether HEFT can be used to constrain the Hubble parameter and find $H_0 = 70.7^{+3.0}_{-3.5}\,{\rm km}\,{\rm s}^{-1}\,{\rm Mpc}^{-1}$. Our constraints are exploratory and subject to certain caveats discussed in the text.
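$S_8$ is the usual derived lensing amplitude, $S_8 = \sigma_8\sqrt{\Omega_m/0.3}$. A one-line check of the definition (the $\sigma_8$ value below is back-derived from the quoted fiducial $S_8$ and $\Omega_m$ purely for illustration; it is not a number reported in the abstract):

```python
import math

def s8(sigma8, omega_m):
    """Lensing amplitude parameter S8 = sigma8 * sqrt(Omega_m / 0.3)."""
    return sigma8 * math.sqrt(omega_m / 0.3)

# Illustrative: sigma8 implied by the fiducial (S8, Omega_m) = (0.786, 0.273)
sigma8_implied = 0.786 / math.sqrt(0.273 / 0.3)
```

With $\Omega_m < 0.3$, the implied $\sigma_8$ is slightly larger than $S_8$ itself.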
Deep learning is a powerful analysis technique that has recently been proposed as a method to constrain cosmological parameters from weak lensing mass maps. Owing to its ability to learn relevant features from the data, it can extract more information from the mass maps than the commonly used power spectrum, and thus achieve better precision for cosmological parameter measurement. We explore the advantage of Convolutional Neural Networks (CNNs) over the power spectrum for varying levels of shape noise and different smoothing scales applied to the maps. We compare the cosmological constraints from the two methods in the $\Omega_M$-$\sigma_8$ plane for sets of 400 deg$^2$ convergence maps. We find that, for a shape noise level corresponding to 8.53 galaxies/arcmin$^2$ and a smoothing scale of $\sigma_s = 2.34$ arcmin, the network is able to generate 45% tighter constraints. For a smaller smoothing scale of $\sigma_s = 1.17$ arcmin the improvement can reach $\sim 50\%$, while for a larger smoothing scale of $\sigma_s = 5.85$ arcmin the improvement decreases to 19%. The advantage generally decreases as the noise level and smoothing scale increase. We present a new training strategy for training the neural network with noisy data, as well as considerations for practical applications of the deep learning approach.
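For a given galaxy density, the per-pixel shape-noise level on a convergence map follows $\sigma_{\rm pix} = \sigma_e/\sqrt{n_{\rm gal}\,A_{\rm pix}}$. A hedged sketch of generating such a noise realization (the intrinsic ellipticity dispersion $\sigma_e = 0.3$ and the pixel scale are assumed values for illustration, not parameters quoted in the paper):

```python
import numpy as np

def shape_noise_map(shape, pix_arcmin, n_gal=8.53, sigma_e=0.3, seed=0):
    """White Gaussian shape noise for a convergence map.

    Per-pixel std: sigma_e / sqrt(n_gal * A_pix), with
    n_gal the source density [gal/arcmin^2] and A_pix = pix_arcmin^2.
    sigma_e is the intrinsic ellipticity dispersion (assumed here).
    """
    sigma_pix = sigma_e / np.sqrt(n_gal * pix_arcmin ** 2)
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, sigma_pix, size=shape)
```

The noise map would be added to each noiseless convergence map before smoothing, which is how noisy training data can be produced cheaply from a fixed set of simulated maps.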
We investigate a new method to recover a possible varying speed of light (VSL) signal, if any, from cosmological data. It comes as an upgrade of [1,2], where it was argued that such a signal could be detected at a single redshift location only. Here, we show how it is possible to extract information on a VSL signal over an extended redshift range. We use mock cosmological data from future galaxy surveys (BOSS, DESI, \emph{WFirst-2.4} and SKA): the sound horizon at decoupling imprinted in the clustering of galaxies (BAO) as an angular diameter distance, and the expansion rate derived from those galaxies recognized as cosmic chronometers. We find that, given the forecast sensitivities of such surveys, a $\sim 1\%$ VSL signal can be detected at the $3\sigma$ confidence level in the redshift interval $z \in [0, 1.55]$. Smaller signals ($\sim 0.1\%$) will be hard to detect, although a $1\sigma$ detection remains possible. Finally, we discuss the degeneracy between a VSL signal and non-zero spatial curvature; we show that, given present bounds on curvature, any signal, if detected, can be attributed to a VSL signal with very high confidence. On the other hand, our method turns out to be useful even in the classical scenario of a constant speed of light: in this case, the signal we reconstruct can be ascribed entirely to spatial curvature and, thus, we would have a method to detect curvature of order $0.01$ in the same redshift range with very high confidence.
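The single-redshift result of [1,2] that this method extends rests on the identity $c(z_M) = D_A(z_M)\,H(z_M)$, which holds at the redshift $z_M$ where the angular diameter distance peaks (since $D_A = D_C/(1+z)$ and $dD_C/dz = c/H$, setting $dD_A/dz=0$ gives $D_A = c/H$ there). A numerical check in flat $\Lambda$CDM (the fiducial $H_0$ and $\Omega_m$ are illustrative assumptions):

```python
import numpy as np

C_KMS = 299792.458   # speed of light [km/s]
H0, OM = 70.0, 0.3   # assumed fiducial flat-LCDM parameters

def hubble(z):
    """H(z) [km/s/Mpc] in flat LCDM."""
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + 1.0 - OM)

def angular_diameter_distance(z, n=2048):
    """D_A(z) = [1/(1+z)] * int_0^z c dz'/H(z'), trapezoidal rule."""
    zg = np.linspace(0.0, z, n)
    f = C_KMS / hubble(zg)
    dc = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zg))
    return dc / (1.0 + z)

# D_A(z) rises, peaks at z_M (~1.6 for these parameters), then falls;
# at the peak, D_A(z_M) * H(z_M) recovers c.
zs = np.linspace(1.0, 2.2, 600)
da = np.array([angular_diameter_distance(z) for z in zs])
z_m = zs[np.argmax(da)]
c_reconstructed = da.max() * hubble(z_m)
```

If the BAO-derived $D_A(z_M)$ and the cosmic-chronometer $H(z_M)$ did not combine to $c$, that mismatch would be the VSL (or curvature) signal.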
The main energy-generating mechanisms in galaxies are black hole (BH) accretion and star formation (SF), and the interplay of these processes drives the evolution of galaxies. MIR/FIR spectroscopy is able to distinguish between BH accretion and SF, as was shown in the past by infrared spectroscopy from space with the Infrared Space Observatory and Spitzer. Spitzer and Herschel spectroscopy together can trace the AGN and SF components in galaxies with extinction-free lines, but almost only in the local Universe, except for a few distant objects. One of the major goals of the study of galaxy evolution is to understand the history of the luminosity source of galaxies along cosmic time. This goal can be achieved with far-IR spectroscopic cosmological surveys. SPICA, in combination with ground-based large single-dish submillimeter telescopes such as CCAT, will offer a unique opportunity to do this. We use galaxy evolution models linked to the observed MIR-FIR counts (including Herschel) to predict the number of sources and their IR line fluxes, as derived from observations of local galaxies. A shallow survey covering an area of 0.5 square degrees, with a typical integration time of 1 hour per pointing, will be able to detect thousands of galaxies in at least three emission lines using SAFARI, the far-IR spectrometer onboard SPICA.