Dark energy is inferred from a Hubble expansion that was slower at epochs earlier than ours. But evidence reviewed here shows that $H_0$ for nearby galaxies is actually less than the currently adopted value and would instead require {\it deceleration} to reach the current value. Distances of Cepheid variables in galaxies in the Local Supercluster have been measured by the Hubble Space Telescope, and it is argued here that they require a low value of $H_0$ along with redshifts that are at least partly intrinsic. The intrinsic component is hypothesized to be a result of particle masses increasing with time. The same considerations apply to Dark Matter. With particle masses growing with time, the condensation from plasmoid to proto-galaxy not only does away with the need for unseen ``dark matter'' but also explains the intrinsic (non-velocity) redshifts of younger matter.
We investigate a generalized form of the phenomenologically emergent dark energy model, known as generalized emergent dark energy (GEDE), introduced by Li and Shafieloo [Astrophys. J. {\bf 902}, 58 (2020)], in light of a series of cosmological probes and considering the evolution of the model at the level of linear perturbations. This model introduces a free parameter $\Delta$ that can discriminate between the $\Lambda$CDM model (corresponding to $\Delta=0$) and the phenomenologically emergent dark energy (PEDE) model (corresponding to $\Delta=1$), allowing us to determine which model is most preferred by the observational datasets. We find evidence in favor of the GEDE model for Planck alone and in combination with R19, while the Bayesian model comparison is inconclusive when Supernovae Type Ia or BAO data are included. In particular, we find that the $\Lambda$CDM model is disfavored at more than $2\sigma$ CL for most of the observational datasets considered in this work, and that PEDE is in agreement with the Planck 2018+BAO+R19 combination within $1\sigma$ CL.
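To make the role of $\Delta$ concrete, the following is a minimal sketch of the GEDE dark-energy density, assuming the tanh parameterization of Li and Shafieloo; the transition redshift `z_t` and density values used here are illustrative placeholders, not fitted quantities. Setting $\Delta=0$ yields a constant density ($\Lambda$CDM limit), while $\Delta=1$ with $z_t=0$ recovers the PEDE form.

```python
import numpy as np

def gede_density(z, delta, omega_de0=0.7, z_t=0.3):
    """Illustrative GEDE dark-energy density (in units of today's critical
    density), assuming the tanh form of Li & Shafieloo (2020).
    z_t is a transition redshift; the defaults are placeholders."""
    num = 1.0 - np.tanh(delta * np.log10((1.0 + z) / (1.0 + z_t)))
    den = 1.0 + np.tanh(delta * np.log10(1.0 + z_t))
    return omega_de0 * num / den

z = np.linspace(0.0, 3.0, 7)
print(gede_density(z, delta=0.0))            # constant: Lambda-CDM limit
print(gede_density(z, delta=1.0, z_t=0.0))   # emergent behaviour: PEDE limit
```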
Holographic dark energy (HDE) describes the vacuum energy in a cosmic IR region whose total energy saturates the bound for avoiding collapse into a black hole. HDE predicts that the dark energy equation of state transitions from the $w>-1$ regime to $w<-1$, so that the Universe accelerates more slowly at early times and faster at late times. We propose HDE as a new {\it physical} resolution to the Hubble constant discrepancy between the cosmic microwave background (CMB) and local measurements. With Planck CMB and galaxy baryon acoustic oscillation (BAO) data, we fit the HDE prediction of the Hubble constant as $H_0 = 71.54 \pm 1.78\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, consistent with local $H_0$ measurements by LMC Cepheid Standards (R19) at the $1.4\sigma$ level. Combining Planck+BAO+R19, we find the HDE parameter $c = 0.51 \pm 0.02$ and $H_0 = 73.12 \pm 1.14\,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, which fits cosmological data at all redshifts. Future CMB and large-scale structure surveys will further test the holographic scenario.
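The quoted phantom-crossing behaviour can be illustrated with a short sketch, assuming the standard HDE equation of state for the future-event-horizon IR cutoff, $w = -\tfrac{1}{3} - \tfrac{2}{3c}\sqrt{\Omega_{\rm de}}$: as the dark-energy fraction grows with time, $w$ moves from above $-1$ to below $-1$ for $c<1$.

```python
import numpy as np

def hde_eos(omega_de, c=0.51):
    """Holographic dark-energy equation of state w = -1/3 - (2/3c) sqrt(Omega_de)
    for the future-event-horizon IR cutoff; c = 0.51 is the Planck+BAO+R19
    value quoted in the abstract."""
    return -1.0 / 3.0 - (2.0 / (3.0 * c)) * np.sqrt(omega_de)

for omega in (0.1, 0.3, 0.7, 0.9):   # dark-energy fraction grows with cosmic time
    print(f"Omega_de = {omega:.1f}  ->  w = {hde_eos(omega):+.2f}")
```

With $c$ this small the crossing of $w=-1$ happens early in the dark-energy-dominated era, which is what drives the faster late-time acceleration and the higher inferred $H_0$.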
Joint analysis of Cosmic Microwave Background, Baryon Acoustic Oscillation, and supernova data has enabled precision estimation of cosmological parameters. New programs will push to 1% uncertainty in the dark energy equation of state and a tightened constraint on curvature, requiring close attention to systematics. Direct 1% measurement of the Hubble constant (H0) would provide a new constraint. It can be obtained without overlapping systematics directly from recessional velocities and geometric distance estimates for galaxies via the mapping of water maser emission that traces the accretion disks of nuclear black holes. We identify redshifts 0.02<z<0.06 as best for small samples, e.g., 10 widely distributed galaxies, each with 3% distance uncertainty. Knowledge of peculiar radial motion is also required. Mapping requires very long baseline interferometry (VLBI) with the finest angular resolution, sensitivity to individual lines of a few mJy-km/s, and baselines that can detect a complex of ~10 mJy lines (peak) in < 1 min. For 2010-2020, large ground apertures (50-100m diameter) augmenting the VLBA are critical, such as EVLA, GBT, Effelsberg, and the Large Millimeter Telescope, for which we propose a 22 GHz receiver and VLBI instrumentation. A space-VLBI aperture may be required, thus motivating US participation in the Japanese VSOP-2 mission (launch c.2013). This will provide 3-4x longer baselines and ~5x improvement in distance uncertainty. There are now 5 good targets at z>0.02, out of ~100 known masers. A single-dish discovery survey of >10,000 nuclei (>2500 hours on the GBT) would build a sample of tens of potential distance anchors. Beyond 2020, a high-frequency SKA could provide larger maser samples, enabling estimation of H0 from individually less accurate distances, and possibly without the need for peculiar motion corrections.
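A rough error budget shows why a sample of order ten maser galaxies at these redshifts suffices for a ~1% H0. The sketch below is a back-of-the-envelope estimate under assumed numbers (a 250 km/s peculiar-velocity error and z ~ 0.04 are illustrative choices, not values from the abstract): each galaxy gives H0 = v/D, and the distance and peculiar-motion errors are added in quadrature and averaged over the sample.

```python
import numpy as np

def h0_fractional_error(n_gal=10, frac_dist_err=0.03,
                        sigma_vpec=250.0, z=0.04, c=2.998e5):
    """Approximate fractional H0 uncertainty from n_gal independent maser
    galaxies, combining a fractional geometric-distance error with a
    peculiar-velocity error sigma_vpec (km/s) relative to cz."""
    frac_vel_err = sigma_vpec / (c * z)            # peculiar motion vs. recession
    per_galaxy = np.hypot(frac_dist_err, frac_vel_err)
    return per_galaxy / np.sqrt(n_gal)

print(f"~{100 * h0_fractional_error():.1f}% with 10 galaxies at z ~ 0.04")
```

At z well below 0.02 the peculiar-velocity term dominates, which is why the intermediate redshift window is preferred for small samples.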
I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give $H_0$ values of around 72-74 km/s/Mpc, with typical errors of 2-3 km/s/Mpc. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km/s/Mpc with typical errors of 1-2 km/s/Mpc. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
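The "mild discrepancy" can be quantified with a naive Gaussian tension estimate using the representative mid-range values and the "typical" errors quoted above (so this is an order-of-magnitude illustration only, not a statement about any specific pair of measurements):

```python
import numpy as np

def tension_sigma(h0_a, err_a, h0_b, err_b):
    """Naive Gaussian tension between two independent H0 estimates."""
    return abs(h0_a - h0_b) / np.hypot(err_a, err_b)

# Mid-range object-based value vs. mid-range CMB-based value, with the
# quoted "typical" errors (assumed Gaussian and independent).
print(f"{tension_sigma(73.0, 2.5, 67.5, 1.5):.1f} sigma")
```

The result, roughly $2\sigma$ with these generous error bars, shows why the significance of the tension hinges on whether the smaller quoted uncertainties of specific object-based measurements can be trusted.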
In this paper we fit two models of Early Dark Energy (EDE) (an increase in the expansion rate before recombination) to the combination of Atacama Cosmology Telescope (ACT) measurements of the Cosmic Microwave Background (CMB) with data from either the WMAP or the Planck satellite, along with measurements of the baryon acoustic oscillations and uncalibrated supernova luminosity distances. We study a phenomenological axion-like potential (axEDE) and a scalar field experiencing a first-order phase transition (NEDE). We find that for both models the Planck-free analysis yields non-zero EDE at $>2\sigma$ and an increased value of $H_0 \sim 70$-$74$ km/s/Mpc, compatible with local measurements, without the inclusion of any prior on $H_0$. On the other hand, the inclusion of Planck data restricts the EDE contribution to an upper limit only at 95% C.L. For axEDE, the combination of Planck and ACT leads to constraints 30% weaker than with Planck alone, and there is no residual Hubble tension. On the other hand, NEDE is more strongly constrained in a Planck+ACT analysis, and the Hubble tension remains at $\sim 3\sigma$, illustrating the ability of CMB data to distinguish between EDE models. We explore the apparent inconsistency between the Planck and ACT data and find that it comes (mostly) from a slight tension between the temperature power spectra at multipoles around $\sim 1000$ and $\sim 1500$. Finally, through a mock analysis of ACT data, we demonstrate that the preference for EDE is not driven by a lack of information at high-$\ell$ when removing Planck data, and that a $\Lambda$CDM fit to the fiducial EDE cosmology results in a significant bias on $\{H_0, \omega_{\rm cdm}\}$. More accurate measurements of the TT power spectrum above $\ell \sim 2500$ and EE between $\ell \sim 300$-$500$ will play a crucial role in differentiating EDE models.
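For readers unfamiliar with the axEDE scenario, here is a minimal sketch of a commonly used axion-like EDE potential, $V(\theta) = m^2 f^2 (1-\cos\theta)^n$ with $n=3$; the abstract only calls the potential "phenomenological axion-like", so the specific exponent and the unit normalizations below are assumptions for illustration.

```python
import numpy as np

def axede_potential(theta, m=1.0, f=1.0, n=3):
    """Axion-like EDE potential V = m^2 f^2 (1 - cos theta)^n in arbitrary
    units; n = 3 is the exponent often adopted in axEDE-type analyses."""
    return (m * f) ** 2 * (1.0 - np.cos(theta)) ** n

theta = np.linspace(0.0, np.pi, 5)
print(axede_potential(theta))
# Near theta = 0 the minimum is flatter than quadratic (V ~ theta^6 for n = 3),
# so the field's energy dilutes rapidly once it starts oscillating after
# recombination-era injection.
```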