Cosmological surveys aim to answer fundamental questions about our Universe, including the nature of dark matter and the reason for the unexpected accelerated expansion of the Universe. Answering these questions requires two important ingredients: 1) data from observations and 2) a theoretical model that allows fast comparison between observation and theory. Most cosmological surveys observe galaxies, which are very difficult to model theoretically due to the complicated physics involved in their formation and evolution; modeling realistic galaxies over cosmological volumes requires running computationally expensive hydrodynamic simulations that can cost millions of CPU hours. In this paper, we propose to use deep learning to establish a mapping between the 3D galaxy distribution in hydrodynamic simulations and its underlying dark matter distribution. One of the major challenges in this pursuit is the very high sparsity of the predicted galaxy distribution. To this end, we develop a two-phase convolutional neural network architecture to rapidly generate galaxy catalogues, and compare our results against a standard cosmological technique. We find that our proposed approach either outperforms or is competitive with traditional cosmological techniques. Compared to the common methods used in cosmology, our approach also provides a good trade-off between time consumption (comparable to the fastest benchmark in the literature) and the quality and accuracy of the predicted simulation. In combination with current and upcoming data from cosmological observations, our method has the potential to help answer fundamental questions about our Universe with high accuracy.
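To illustrate the sparsity challenge mentioned above, the following minimal PyTorch sketch (cube size, layer widths, and variable names are illustrative assumptions, not the network used in the paper) shows how a naive voxel-wise regression loss is dominated by empty space:

```python
# Hypothetical sketch, not the paper's architecture: a small 3D CNN mapping a
# dark matter density cube to a galaxy count cube, illustrating why target
# sparsity makes a naive regression loss uninformative.
import torch
import torch.nn as nn

class Dark2Galaxy(nn.Module):
    def __init__(self, channels=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, 1, kernel_size=1),  # predicted galaxy counts per voxel
        )

    def forward(self, x):
        return self.net(x)

model = Dark2Galaxy()
dm_density = torch.rand(1, 1, 32, 32, 32)       # toy dark matter density cube
galaxy_counts = torch.zeros(1, 1, 32, 32, 32)   # target: space is mostly empty
galaxy_counts[0, 0, 5, 12, 20] = 3.0            # only a handful of voxels
galaxy_counts[0, 0, 17, 3, 9] = 1.0             # actually host galaxies

pred = model(dm_density)
naive_loss = nn.functional.mse_loss(pred, galaxy_counts)
# With ~2 occupied voxels out of 32**3, the average is dominated by empty space,
# which motivates the two-phase (classify-then-regress) treatment described above.
print(naive_loss.item())
```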
Cosmological simulations play an important role in the interpretation of astronomical data, in particular in comparing observed data with our theoretical expectations. However, to compare data with these simulations, the simulations in principle need to include gravity, magneto-hydrodynamics, radiative transfer, etc. These ideal large-volume simulations (gravo-magneto-hydrodynamical) are extremely computationally expensive and can cost tens of millions of CPU hours to run. In this paper, we propose a deep learning approach to map from the dark-matter-only simulation (computationally cheaper) to the galaxy distribution (from the much costlier cosmological simulation). The main challenge of this task is the high sparsity in the target galaxy distribution: space is mostly empty. We propose a cascade architecture composed of a classification filter followed by a regression procedure. We show that our method outperforms a state-of-the-art model used in the astronomical community, and provides a good trade-off between computational cost and prediction accuracy.
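The cascade idea can be made concrete with a minimal sketch, assuming toy layer sizes and variable names (this is not the exact model of the paper): a classifier first flags voxels likely to host galaxies, and a regressor then predicts galaxy counts only in those voxels.

```python
# Minimal sketch of a classification-then-regression cascade over a 3D field.
# Layer sizes and thresholds are assumptions for illustration only.
import torch
import torch.nn as nn

def small_cnn(out_channels):
    return nn.Sequential(
        nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv3d(16, out_channels, 1),
    )

classifier = small_cnn(1)   # phase 1: voxel occupancy logits
regressor = small_cnn(1)    # phase 2: galaxy counts in occupied voxels

dm_density = torch.rand(1, 1, 32, 32, 32)   # toy dark-matter-only input cube

with torch.no_grad():
    occupancy = torch.sigmoid(classifier(dm_density)) > 0.5   # classification filter
    counts = torch.relu(regressor(dm_density))                # non-negative counts
    galaxy_field = counts * occupancy.float()                 # empty voxels stay empty

print(galaxy_field.nonzero().shape[0], "voxels predicted to host galaxies")
```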
We present forecasts on the detectability of ultra-light axion-like particles (ULAP) from future 21cm radio observations around the epoch of reionization (EoR). We show that an axion that is the dominant dark matter component has a significant impact on the reionization history due to the suppression of small-scale density perturbations in the early universe. This behavior depends strongly on the mass of the axion particle. Using numerical simulations of the brightness temperature field of neutral hydrogen over a large redshift range, we construct a suite of training data. These data are used to train a convolutional neural network that builds a direct connection between the spatial structure of the brightness temperature field and the input axion mass. We construct mock observations of the future Square Kilometer Array survey, SKA1-Low, and find that even in the presence of realistic noise and resolution constraints, the network is still able to predict the input axion mass. We find that the axion mass can be recovered over a wide mass range with a precision of approximately 20%, and that, if axions make up the whole dark matter contribution, they can be detected with SKA1-Low at 68% confidence if the axion mass is $M_X < 1.86 \times 10^{-20}$ eV, although this threshold decreases to $M_X < 5.25 \times 10^{-21}$ eV if we relax our assumptions on the astrophysical modeling by treating the astrophysical parameters as nuisance parameters.
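For concreteness, here is a minimal sketch of the kind of network described above, with illustrative layer sizes and an assumed log-mass regression target; it is not the architecture or training setup used in the paper.

```python
# Illustrative sketch only: a 3D CNN that regresses the (log) axion mass from a
# 21cm brightness temperature cube. All sizes and names here are assumptions.
import torch
import torch.nn as nn

class AxionMassNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),        # global pooling over the cube
        )
        self.head = nn.Linear(32, 1)        # predicted log10(axion mass / eV)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

net = AxionMassNet()
brightness_temperature = torch.randn(4, 1, 32, 32, 32)  # toy mock 21cm cubes
log_mass_pred = net(brightness_temperature)
print(log_mass_pred.shape)  # (4, 1): one mass estimate per input cube
```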
We show that cold dark matter particles interacting through a Yukawa potential could naturally explain the recently observed cores in dwarf galaxies without affecting the dynamics of objects with a much larger velocity dispersion, such as clusters of galaxies. The velocity dependence of the associated cross-section as well as the possible exothermic nature of the interaction alleviates earlier concerns about strongly interacting dark matter. Dark matter evaporation in low-mass objects might explain the observed deficit of satellite galaxies in the Milky Way halo and have important implications for the first galaxies and reionization.
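For reference, a standard form of the screened (Yukawa) potential assumed in such models is shown below; the notation ($\alpha_X$, $m_\phi$) is generic and not necessarily that of the paper.

```latex
% Yukawa potential between two dark matter particles, mediated by a light
% boson of mass m_phi with coupling alpha_X; the sign depends on whether
% the interaction is attractive or repulsive.
\begin{equation}
  V(r) = \pm \frac{\alpha_X}{r}\, e^{-m_\phi r}
\end{equation}
% The resulting transfer cross-section falls steeply with relative velocity,
% so scattering is strong in dwarf galaxies (v ~ 10 km/s) while leaving
% clusters (v ~ 1000 km/s) essentially unaffected, as stated above.
```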
We present a new statistical method to determine the relationship between the stellar masses of galaxies and the masses of their host dark matter haloes over the entire cosmic history from z~4 to the present. This multi-epoch abundance matching (MEAM) model self-consistently takes into account that satellite galaxies first become satellites at times earlier than they are observed. We employ a redshift-dependent parameterization of the stellar-to-halo mass relation to populate haloes and subhaloes in the Millennium simulations with galaxies, requiring that the observed stellar mass functions at several redshifts be reproduced simultaneously. Using merger trees extracted from the dark matter simulations in combination with MEAM, we predict the average assembly histories of galaxies, separating the contributions of star formation within the galaxies (in-situ) and accretion of stars (ex-situ). The peak star formation efficiency decreases with redshift from 23% at z=0 to 9% at z=4, while the corresponding halo mass increases from $10^{11.8} M_\odot$ to $10^{12.5} M_\odot$. The star formation rate of central galaxies peaks at a redshift that depends on halo mass; for massive haloes this peak is at early cosmic times, while for low-mass galaxies the peak has not been reached yet. In haloes similar to that of the Milky Way, about half of the central stellar mass is assembled after z=0.7. In low-mass haloes, the accretion of satellites contributes little to the assembly of their central galaxies, while in massive haloes more than half of the central stellar mass is formed ex-situ, with significant accretion of satellites at z<2. We find that our method implies a cosmic star formation history and an evolution of specific star formation rates which are consistent with those inferred directly. We present convenient fitting functions for stellar masses, star formation rates, and accretion rates as functions of halo mass and redshift.
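To illustrate the abundance matching idea at a single epoch (MEAM itself fits a redshift-dependent parameterization across several epochs simultaneously), here is a toy rank-ordering sketch; the synthetic mass distributions and variable names are assumptions, not the data or model of the paper.

```python
# Toy single-epoch abundance matching: rank-order match (sub)halo masses to
# stellar masses so the assumed stellar mass function is reproduced exactly.
import numpy as np

rng = np.random.default_rng(0)

# Assumed, synthetic (sub)halo masses from a simulation box, in Msun
halo_mass = 10 ** rng.uniform(10.5, 14.5, size=100_000)

# Assumed, synthetic stellar masses drawn from a toy stellar mass function
stellar_mass = 10 ** rng.normal(10.0, 0.6, size=100_000)

# Rank-order matching: the most massive halo hosts the most massive galaxy.
halo_order = np.argsort(halo_mass)[::-1]
stellar_sorted = np.sort(stellar_mass)[::-1]

assigned_stellar_mass = np.empty_like(halo_mass)
assigned_stellar_mass[halo_order] = stellar_sorted

# The implied stellar-to-halo mass relation can then be read off directly.
ratio = assigned_stellar_mass / halo_mass
near_peak = (halo_mass > 10**11.9) & (halo_mass < 10**12.1)
print("median M*/Mh for ~10^12 Msun haloes:", np.median(ratio[near_peak]))
```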
Measuring the sum of the three active neutrino masses, $M_\nu$, is one of the most important challenges in modern cosmology. Massive neutrinos imprint characteristic signatures on several cosmological observables, in particular on the large-scale structure of the Universe. In order to maximize the information that can be retrieved from galaxy surveys, accurate theoretical predictions in the non-linear regime are needed. Currently, one way to achieve those predictions is by running cosmological numerical simulations. Unfortunately, producing those simulations requires high computational resources -- seven hundred CPU hours for each neutrino mass case. In this work, we propose a new method, based on a deep learning network (U-Net), to quickly generate simulations with massive neutrinos from standard $\Lambda$CDM simulations without neutrinos. We compute multiple relevant statistical measures of the deep-learning-generated simulations and conclude that our method accurately reproduces the 3-dimensional spatial distribution of matter down to non-linear scales: $k < 0.7\,h/\mathrm{Mpc}$. Finally, our method allows us to generate massive neutrino simulations 10,000 times faster than the traditional methods.
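As a rough illustration of the U-Net idea, the following minimal sketch maps a toy $\Lambda$CDM density cube to a predicted massive-neutrino cube; the single down/up level and layer widths are assumptions for brevity, not the architecture used in this work.

```python
# Minimal 3D U-Net sketch with one downsampling level and one skip connection.
# It maps a LambdaCDM density cube to a corresponding massive-neutrino cube.
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    def __init__(self, c=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, c, 3, padding=1), nn.ReLU())
        self.down = nn.Sequential(nn.Conv3d(c, 2 * c, 3, stride=2, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose3d(2 * c, c, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv3d(2 * c, c, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(c, 1, 1))

    def forward(self, x):
        skip = self.enc(x)                        # full-resolution features
        bottom = self.down(skip)                  # downsampled features
        upsampled = self.up(bottom)               # back to full resolution
        merged = torch.cat([upsampled, skip], 1)  # skip connection
        return self.dec(merged)

net = TinyUNet3D()
lcdm_field = torch.randn(1, 1, 32, 32, 32)   # toy LambdaCDM overdensity cube
neutrino_field = net(lcdm_field)             # predicted massive-neutrino cube
print(neutrino_field.shape)
```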