We present GIGANTES, the most extensive and realistic void catalog suite ever released -- containing over 1 billion cosmic voids covering a volume larger than the observable Universe, more than 20 TB of data, and created by running the void finder VIDE on QUIJOTE's halo simulations. The expansive and detailed GIGANTES suite, spanning thousands of cosmological models, opens up the study of voids to address compelling questions: Do voids carry unique cosmological information? How is this information correlated with galaxy information? Leveraging the large number of voids in the GIGANTES suite, our Fisher constraints demonstrate that voids contain additional information, critically tightening constraints on cosmological parameters. We use traditional void summary statistics (void size function, void density profile) and the void auto-correlation function, which independently yields an error of $0.13\,\mathrm{eV}$ on $\sum m_\nu$ for a $1\,h^{-3}\,\mathrm{Gpc}^3$ simulation, without CMB priors. Combining halos and voids, we forecast an error of $0.09\,\mathrm{eV}$ from the same volume. Extrapolating to next-generation multi-Gpc$^3$ surveys such as DESI, Euclid, SPHEREx, and the Roman Space Telescope, we expect voids to yield an independent determination of the neutrino mass. Crucially, GIGANTES is the first void catalog suite expressly built for intensive machine learning exploration. We illustrate this by training a neural network to perform likelihood-free inference on the void size function. Cosmology problems provide an impetus to develop novel deep learning techniques that leverage the symmetries imposed on the universe by physical laws, interpret models, and accurately predict errors. With GIGANTES, machine learning gains an impressive dataset offering unique problems that will stimulate new techniques.
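As a brief illustration of the Fisher approach described above (a minimal sketch, not the GIGANTES pipeline), the snippet below forecasts marginalized parameter errors from a binned void statistic. All inputs are hypothetical placeholders: `stats` stands in for the statistic measured in many fiducial realizations, and `deriv` for its numerical derivatives with respect to the parameters.

```python
import numpy as np

# Minimal Fisher-forecast sketch for a binned void summary statistic
# (e.g. the void size function). Placeholder inputs: `stats` holds the
# statistic in many fiducial realizations (n_sims x n_bins) and
# `deriv[p]` the derivative of its mean w.r.t. parameter p (n_bins).
rng = np.random.default_rng(0)
n_sims, n_bins = 1000, 20
stats = rng.normal(size=(n_sims, n_bins))   # placeholder measurements
deriv = {"Om": rng.normal(size=n_bins),     # placeholder d<stat>/dOmega_m
         "Mnu": rng.normal(size=n_bins)}    # placeholder d<stat>/dsum(m_nu)

params = list(deriv)
cov = np.cov(stats, rowvar=False)
# Hartlap factor: debias the inverse of a covariance estimated from sims.
hartlap = (n_sims - n_bins - 2) / (n_sims - 1)
icov = hartlap * np.linalg.inv(cov)

fisher = np.array([[deriv[a] @ icov @ deriv[b] for b in params]
                   for a in params])
# Marginalized 1-sigma errors: sqrt of the diagonal of the inverse Fisher.
errors = np.sqrt(np.diag(np.linalg.inv(fisher)))
for p, e in zip(params, errors):
    print(f"sigma({p}) = {e:.3f}")
```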
We report novel cosmological constraints obtained from cosmic voids in the final BOSS DR12 dataset. They arise from the joint analysis of geometric and dynamic distortions of average void shapes (i.e., the stacked void-galaxy cross-correlation function) in redshift space. Our model uses tomographic deprojection to infer real-space void profiles and self-consistently accounts for the Alcock-Paczynski (AP) effect and redshift-space distortions (RSD) without any prior assumptions on cosmology or structure formation. It is derived from first physical principles and provides an extremely good description of the data at linear perturbation order. We validate this model with the help of mock catalogs and apply it to the final BOSS data to constrain the RSD and AP parameters $f/b$ and $D_A H/c$, where $f$ is the linear growth rate, $b$ the linear galaxy bias, $D_A$ the comoving angular diameter distance, $H$ the Hubble rate, and $c$ the speed of light. In addition, we include two nuisance parameters in our analysis to marginalize over potential systematics. We obtain $f/b=0.540\pm0.091$ and $D_A H/c=0.588\pm0.004$ from the full void sample at a mean redshift of $z=0.51$. In a flat $\Lambda$CDM cosmology, this implies $\Omega_\mathrm{m}=0.312\pm0.020$ for the present-day matter density parameter. When we use additional information from the survey mocks to calibrate our model, these constraints improve to $f/b=0.347\pm0.023$, $D_A H/c=0.588\pm0.003$, and $\Omega_\mathrm{m}=0.310\pm0.017$. However, we emphasize that the calibration depends on the specific model of cosmology and structure formation assumed in the mocks, so the calibrated results should be considered less robust. Nevertheless, our calibration-independent constraints are among the tightest of their kind to date, demonstrating the immense potential of using cosmic voids for cosmology in current and future data.
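To make the geometric constraint concrete, here is a rough sketch (assuming flat $\Lambda$CDM and a single effective redshift, not the paper's full likelihood analysis) of how a measured AP product $D_A H/c$ maps onto $\Omega_\mathrm{m}$. Since the comoving angular diameter distance is $D_A=(c/H_0)\int_0^z dz'/E(z')$ and $H=H_0 E(z)$, the product is independent of $H_0$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative flat-LCDM inversion of the AP product D_A*H/c = 0.588 at
# z = 0.51 (values from the abstract). With comoving D_A, the product
# D_A*H/c = E(z) * int_0^z dz'/E(z') does not depend on H0.
def E(z, om):
    return np.sqrt(om * (1 + z) ** 3 + 1 - om)

def dah_over_c(om, z=0.51):
    integral, _ = quad(lambda zp: 1.0 / E(zp, om), 0.0, z)
    return E(z, om) * integral

# The product increases monotonically with Omega_m, so a root finder works.
om_fit = brentq(lambda om: dah_over_c(om) - 0.588, 0.05, 0.95)
print(f"Omega_m = {om_fit:.3f}")  # rough single-redshift estimate only
```

Such a one-number inversion is only illustrative; the quoted $\Omega_\mathrm{m}$ constraint comes from the paper's full analysis of the measured correlation function.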
We present a Python tool to generate a standard dataset from solar images that allows for user-defined selection criteria and a range of pre-processing steps. Our Python tool works with all image products from both the Solar and Heliospheric Observatory (SoHO) and Solar Dynamics Observatory (SDO) missions. We discuss a dataset produced from the SoHO mission's multi-spectral images which is free of missing or corrupt data as well as planetary transits in coronagraph images, and is temporally synced, making it ready for input to a machine learning system. Machine-learning-ready images are a valuable resource for the community because they can be used, for example, for forecasting space weather parameters. We illustrate the use of these data with a 3-5-day-ahead forecast of the north-south component of the interplanetary magnetic field (IMF) observed at Lagrange point one (L1). For this use case, we apply a deep convolutional neural network (CNN) to a subset of the full SoHO dataset and compare with baseline results from a Gaussian Naive Bayes classifier.
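A minimal sketch of the kind of baseline mentioned above, using scikit-learn's GaussianNB on flattened pixel features. The arrays here are random placeholders standing in for the preprocessed SoHO images and the later-observed IMF polarity, and the image size and binary-label setup are assumptions for illustration.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Toy baseline: classify the sign of the IMF north-south component (Bz)
# a few days ahead from solar images. Both arrays are random stand-ins,
# so the reported accuracy is meaningless here (~chance level).
rng = np.random.default_rng(42)
n_samples, side = 500, 32
images = rng.normal(size=(n_samples, side, side))  # placeholder images
labels = rng.integers(0, 2, size=n_samples)        # 0 = south, 1 = north

X = images.reshape(n_samples, -1)                  # flatten pixels to features
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
print(f"baseline accuracy: {accuracy_score(y_test, clf.predict(X_test)):.2f}")
```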
We reexamine big bang nucleosynthesis with large-scale baryon density inhomogeneities when the length scale of the density fluctuations exceeds the neutron diffusion length ($\sim 10^7$-$10^8$ cm at BBN), and the amplitude of the fluctuations is sufficiently small to prevent gravitational collapse. In this limit, the final light element abundances can be determined by simply mixing the abundances from regions with different baryon/photon ratios without interactions. We examine Gaussian, lognormal, and gamma distributions for the baryon/photon ratio, $\eta$. We find that the deuterium and lithium-7 abundances increase with the RMS fluctuation in $\eta$, while the effect on helium-4 is much smaller. We show that these increases in the deuterium and lithium-7 abundances are a consequence of Jensen's inequality, and we derive analytic approximations for these abundances in the limit of small RMS fluctuations. Observational upper limits on the primordial deuterium abundance constrain the RMS fluctuation in $\eta$ to be less than $17\%$ of the mean value of $\eta$. This provides us with a new limit on the graininess of the early universe.
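The mixing argument can be illustrated numerically: for a convex yield $Y(\eta)$, Jensen's inequality gives $\langle Y(\eta)\rangle \geq Y(\langle\eta\rangle)$, so averaging over any $\eta$ distribution raises the abundance. The sketch below uses a lognormal distribution and a toy power-law stand-in for the actual deuterium dependence on $\eta$; the slope and fiducial $\eta$ are assumptions, not values from the paper.

```python
import numpy as np

# Toy mixing of D/H over a lognormal eta distribution. The power law
# below is a rough convex stand-in for the true D/H(eta) dependence.
def deuterium_yield(eta, eta0=6.1e-10):
    return (eta / eta0) ** (-1.6)  # convex and decreasing in eta

eta0 = 6.1e-10
rng = np.random.default_rng(1)
for frac in (0.0, 0.1, 0.17):      # RMS fluctuation / mean of eta
    if frac == 0.0:
        mixed = deuterium_yield(eta0)
    else:
        # Lognormal with mean eta0 and standard deviation frac * eta0.
        sigma2 = np.log(1 + frac ** 2)
        eta = rng.lognormal(np.log(eta0) - sigma2 / 2,
                            np.sqrt(sigma2), 10 ** 6)
        mixed = deuterium_yield(eta).mean()  # Jensen: >= homogeneous value
    print(f"RMS/mean = {frac:.2f}: D/H relative to homogeneous = {mixed:.4f}")
```

For a power-law yield $Y\propto\eta^a$ this average can be done analytically, giving $\langle Y\rangle/Y(\bar\eta)\approx 1+\tfrac{1}{2}a(a-1)(\sigma/\bar\eta)^2$ for small RMS fluctuations, consistent with the small-fluctuation expansion described above.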
We review the advanced version of the KKLT construction and pure $d=4$ de Sitter supergravity, involving a nilpotent multiplet, with regard to various conjectures that de Sitter states cannot exist in string theory. We explain why we consider these conjectures problematic and not well motivated, and why the recently proposed alternative string theory models of dark energy, which ignore vacuum stabilization, are ruled out by cosmological observations at least at the $3\sigma$ level, i.e. with more than $99.7\%$ confidence.
The Universe is mostly composed of large and relatively empty domains known as cosmic voids, whereas its matter content is predominantly distributed along their boundaries. The remaining material inside them, either dark or luminous matter, is attracted to these boundaries and causes voids to expand faster and to grow emptier over time. Using the distribution of galaxies centered on voids identified in the Sloan Digital Sky Survey and adopting minimal assumptions on the statistical motion of these galaxies, we constrain the average matter content of the Universe today, $\Omega_\mathrm{m}=0.281\pm0.031$, as well as the linear growth rate of structure, $f/b=0.417\pm0.089$ at median redshift $\bar{z}=0.57$, where $b$ is the galaxy bias ($68\%$ C.L.). These values originate from a percent-level measurement of the anisotropic distortion of the void-galaxy cross-correlation function, $\varepsilon = 1.003\pm0.012$, and are robust to consistency tests with bootstraps of the data and simulated mock catalogs within an additional systematic uncertainty of half that size. They surpass (and are complementary to) existing constraints by unlocking cosmological information on smaller scales through an accurate model of nonlinear clustering and dynamics in void environments. As such, our analysis furnishes a powerful probe of deviations from Einstein's general relativity in the low-density regime, which has largely remained untested so far. We find no evidence for such deviations in the data at hand.
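For context, the AP distortion parameter is commonly defined as the ratio of transverse to line-of-sight rescalings of void shapes relative to a fiducial cosmology (a standard convention; the paper's exact notation may differ):

```latex
% AP distortion: ratio of transverse (q_perp) to line-of-sight (q_par)
% rescalings relative to the fiducial cosmology; a measurement of
% epsilon = 1 means the fiducial cosmology is consistent with the data.
\varepsilon \equiv \frac{q_\perp}{q_\parallel}
            = \frac{D_A(z)\,H(z)}{D_A^{\mathrm{fid}}(z)\,H^{\mathrm{fid}}(z)}
```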