A self-consistent treatment of cosmological structure formation and expansion within classical general relativity may lead to extra expansion beyond that expected in a structureless universe. We argue that, in comparison to an early-epoch, extrapolated Einstein-de Sitter model, about 10-15% extra expansion at the present epoch is sufficient to render superfluous the 68% contribution of dark energy to the energy density budget, and that this is observationally realistic.
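As a rough back-of-the-envelope check (our arithmetic, not a calculation from the paper): if the extra expansion shows up as a comparable fractional increase $f$ in luminosity distance, the corresponding distance-modulus shift is
$$\Delta m = 5\log_{10}(1+f) \approx 0.21 \ \text{mag for } f=0.10, \qquad \approx 0.30 \ \text{mag for } f=0.15,$$
which is of the same order as the dimming of high-redshift SNe Ia, relative to a matter-only prediction, that is usually attributed to dark energy.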
This paper makes two points. First, we show that the line-of-sight solution to cosmic microwave background anisotropies in Fourier space, even though formally defined for arbitrarily large wavelengths, leads to position-space solutions which depend only on the sources of anisotropies inside the past light-cone of the observer. This happens order by order in a series expansion in powers of the visibility $\gamma=e^{-\mu}$, where $\mu$ is the optical depth to Thomson scattering. We show that the CMB anisotropies are regulated by spacetime window functions which have support only inside the past light-cone of the point of observation. Second, we show that the Fourier-Bessel expansion of the physical fields (including the temperature and polarization moments) is an alternative to the usual Fourier basis as a framework to compute the anisotropies. In that expansion, for each multipole $l$ there is a discrete tower of momenta $k_{i,l}$ (not a continuum) which can affect physical observables, the smallest momentum being $k_{1,l} \sim l$. The Fourier-Bessel modes take into account precisely the information from the sources of anisotropies that propagates from the initial value surface to the point of observation - no more, no less. We also show that the physical observables (the temperature and polarization maps), and hence the angular power spectra, are unaffected by that choice of basis. This implies that the Fourier-Bessel expansion is the optimal scheme with which one can compute CMB anisotropies. (Abridged)
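For orientation, here is a minimal sketch of the Fourier-Bessel basis on a ball of radius $R$ (our illustration of the standard construction; the radius $R$ and the normalization conventions are assumptions, not taken from the paper):
$$f(r,\hat{n}) = \sum_{l,m} \sum_{i\ge 1} f_{ilm}\, j_l(k_{i,l}\, r)\, Y_{lm}(\hat{n}), \qquad k_{i,l} = \frac{x_{i,l}}{R},$$
where $x_{i,l}$ is the $i$-th zero of the spherical Bessel function $j_l$, so for each $l$ the allowed momenta form a discrete tower. Since the first zero satisfies $x_{1,l} = l + \mathcal{O}(l^{1/3})$ at large $l$, the smallest momentum scales as $k_{1,l} \sim l/R$, consistent with $k_{1,l} \sim l$ in units where $R=1$.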
The model of holographic dark energy (HDE) with massive neutrinos and/or dark radiation is investigated in detail. The background and perturbation evolutions in the HDE model are calculated. We employ the parameterized post-Friedmann (PPF) approach to overcome the gravitational instability problem (the divergence of dark energy perturbations) caused by the equation-of-state parameter $w$ evolving across the phantom divide $w=-1$ in the HDE model with $c<1$. We thus derive the evolutions of the density perturbations of the various components and of the metric fluctuations in the HDE model. The impacts of massive neutrinos and dark radiation on the CMB anisotropy power spectrum and the matter power spectrum in the HDE scenario are discussed. Furthermore, we constrain the models of HDE with massive neutrinos and/or dark radiation by using the latest measurements of the expansion history and the growth of structure, including the Planck CMB temperature data, the baryon acoustic oscillation data, the JLA supernova data, the direct measurement of the Hubble constant, the cosmic shear data of weak lensing, the Planck CMB lensing data, and the redshift space distortions data. From these data we find $\sum m_\nu < 0.186$ eV (95% CL) and $N_{\rm eff} = 3.75^{+0.28}_{-0.32}$ in the HDE model.
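For context, the defining relations of HDE (a sketch following the standard holographic dark energy literature, with $M_{\rm pl}$ the reduced Planck mass and $R_h$ the future event horizon) make clear why $c<1$ forces the phantom crossing:
$$\rho_{\rm de} = 3c^2 M_{\rm pl}^2 R_h^{-2}, \qquad R_h = a\int_t^\infty \frac{dt'}{a(t')}, \qquad w = -\frac{1}{3} - \frac{2\sqrt{\Omega_{\rm de}}}{3c}.$$
Since $\Omega_{\rm de}\to 1$ at late times, $w$ drops below $-1$ once $\sqrt{\Omega_{\rm de}} > c$, which happens during the evolution precisely when $c<1$; this is the crossing that the PPF approach is designed to handle.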
We present a short (and necessarily incomplete) review of the evidence for the accelerated expansion of the Universe. The most direct probe of acceleration relies on the detailed study of type Ia supernovae (SN). Assuming that these are standardizable candles and that they fairly sample a homogeneous and isotropic Universe, the evidence for acceleration can be tested in a model- and calibration-independent way. Various light-curve fitting procedures have been proposed and tested. While several fitters give consistent results for the so-called Constitution set, they lead to inconsistent results for the recently released SDSS SN. Adopting the SALT fitter and relying on the Union set, cosmic acceleration is detected by a purely kinematic test at 7 sigma when spatial flatness is assumed, and at 4 sigma without any assumption about the spatial geometry. A weak point of this method is the local set of SN (at z < 0.2), as these SN are essential to anchor the Hubble diagram. They are drawn from a volume much smaller than the Hubble volume and could be affected by local structure. Without the assumption of homogeneity, there is no evidence for acceleration, as the effects of acceleration are degenerate with the effects of inhomogeneities. Unless we sit at the centre of the Universe, such inhomogeneities can be constrained by SN observations through tests of the isotropy of the Hubble flow.
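To recall what a purely kinematic test involves (a standard low-redshift expansion, not specific to this review), the luminosity distance is expanded as
$$d_L(z) = \frac{cz}{H_0}\left[1 + \frac{1}{2}(1-q_0)\,z + \mathcal{O}(z^2)\right], \qquad q_0 \equiv -\left.\frac{\ddot{a}\,a}{\dot{a}^2}\right|_{0},$$
so that fitting the SN Hubble diagram for $q_0$, without assuming any dynamical model, detects acceleration whenever $q_0<0$ is established.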
We study how to set the initial evolution of general cosmological fluctuations at second order, after neutrino decoupling. We compute approximate initial solutions for the transfer functions of all the relevant cosmological variables sourced by quadratic combinations of adiabatic and isocurvature modes. We perform these calculations in synchronous gauge, assuming a Universe described by the $\Lambda$CDM model and composed of neutrinos, photons, baryons and dark matter. We highlight the importance of mixed modes, which are sourced by two different isocurvature or adiabatic modes and do not exist at the linear level. In particular, we investigate the so-called compensated isocurvature mode and find non-trivial initial evolution when it is mixed with the adiabatic mode, in contrast to the result at linear order, and even at second order for the unmixed mode. Non-trivial evolution also arises when this compensated isocurvature mode is mixed with the neutrino density isocurvature mode. Regarding the neutrino velocity isocurvature mode, we show that it unavoidably generates non-regular (decaying) modes at second order. Our results can be applied to second-order Boltzmann solvers to calculate the effects of isocurvature modes on non-linear observables.
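For reference, the compensated isocurvature mode mentioned above is conventionally defined so that the baryon and cold dark matter perturbations cancel (a standard definition; the notation here is ours):
$$\delta\rho_b + \delta\rho_c = 0 \quad\Longleftrightarrow\quad \delta_b = -\frac{\rho_c}{\rho_b}\,\delta_c,$$
with photons and neutrinos initially unperturbed. At linear order this mode leaves the metric and the CMB essentially untouched, which is what makes its non-trivial second-order evolution, when mixed with the adiabatic mode, noteworthy.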
Neural language models trained with a predictive or masked objective have proven successful at capturing short- and long-distance syntactic dependencies. Here, we focus on verb argument structure in German, which has the interesting property that verb arguments may appear in a relatively free order in subordinate clauses. Therefore, checking that the verb argument structure is correct cannot be done in a strictly sequential fashion, but rather requires keeping track of the arguments' cases irrespective of their order. We introduce a new probing methodology based on minimal variation sets and show that both Transformers and LSTMs achieve scores substantially better than chance on this test. Like humans, they also show graded judgments, preferring canonical word orders and plausible case assignments. However, we also found unexpected discrepancies in the strength of these effects: the LSTMs have difficulty rejecting ungrammatical sentences containing frequent argument structure types (double nominatives), and the Transformers tend to overgeneralize, accepting some infrequent word orders or implausible sentences that humans barely accept.
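To make the probing setup concrete, here is a minimal sketch of minimal-pair scoring with a causal language model (our illustration using the Hugging Face transformers API; the model choice and the German sentences are hypothetical stand-ins, not the paper's models or stimuli):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Illustrative model choice; a German or multilingual LM would be used in practice.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence):
    # Total log-probability of the sentence under the causal LM.
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean NLL over predicted tokens
    return -loss.item() * (ids.shape[1] - 1)

# A hypothetical minimal variation set: same lexical material, varied case and order.
minimal_set = {
    "canonical":         "Der Hund sieht den Mann.",  # nominative subject, accusative object
    "scrambled":         "Den Mann sieht der Hund.",  # non-canonical but grammatical order
    "double nominative": "Der Hund sieht der Mann.",  # ungrammatical case assignment
}
for label, sent in minimal_set.items():
    print(f"{label:>17}: {sentence_logprob(sent):.2f}")

A model passes an item if the grammatical variants receive higher log-probability than the ungrammatical one; graded preferences (canonical over scrambled over ungrammatical) can be read off the same scores.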