
Varying fundamental constants principal component analysis: additional hints about the Hubble tension

Posted by: Luke Hart
Publication date: 2021
Research field: Physics
Paper language: English





Varying fundamental constants (VFC) [e.g., the fine-structure constant, $\alpha_{\rm EM}$] can arise in numerous extended cosmologies. Through their effect on the decoupling of baryons and photons during last scattering and reionisation, these models can be directly constrained using measurements of the cosmic microwave background (CMB) temperature and polarization anisotropies. Previous investigations focused mainly on time-independent changes to the values of fundamental constants. Here we generalize to time-dependent variations. Instead of directly studying various VFC parameterizations, we perform a model-independent principal component analysis (PCA), directly using an eigenmode decomposition of the varying constant during recombination. After developing the formalism, we use Planck 2018 data to obtain new VFC limits, showing that three independent VFC modes can be constrained at present. No indications for significant departures from the standard model are found with Planck data. Cosmic variance limited modes are also compared and simple forecasts for the Simons Observatory are carried out, showing that in the future improvements of the current constraints by a factor of $\simeq 3$ can be anticipated. Our modes focus solely on VFC at redshifts $z\geq 300$. This implies that they do not capture some of the degrees of freedom relating to the reionisation era. This aspect provides important new insights into the possible origin of the Hubble tension, hinting that indeed a combined modification of recombination and reionisation physics could be at work. An extended PCA, covering both recombination and reionisation simultaneously, could shed more light on this question, as we emphasize here.
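As a rough illustration of the eigenmode construction described in the abstract, the sketch below builds PCA modes from a stand-in Fisher matrix over redshift bins of $\delta\alpha/\alpha$. The binning, the mock response matrix and the three-mode truncation are assumptions for illustration only, not the paper's actual pipeline (which derives the Fisher matrix from CMB spectra computed with a modified recombination code).

```python
import numpy as np

# Redshift bins for delta(alpha)/alpha over the recombination era (z >= 300);
# illustrative values only -- the binning used in the paper differs.
z_bins = np.linspace(300.0, 2000.0, 40)

# Stand-in for the response of the CMB spectra to each bin; in practice these
# derivatives come from a Boltzmann code with a modified recombination module.
rng = np.random.default_rng(0)
J = rng.normal(size=(200, z_bins.size))
F = J.T @ J  # mock Fisher matrix for the binned variations (positive semi-definite)

# The VFC eigenmodes are the eigenvectors of the Fisher matrix,
# ordered by decreasing eigenvalue (i.e. increasing forecast error).
eigvals, eigvecs = np.linalg.eigh(F)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the best-constrained modes; the paper finds three are measurable with Planck.
n_modes = 3
modes = eigvecs[:, :n_modes]               # E_i(z): the eigenmodes of the alpha variation
sigmas = 1.0 / np.sqrt(eigvals[:n_modes])  # forecast 1-sigma errors on their amplitudes

# Any variation is then expanded as delta(alpha)(z)/alpha = sum_i mu_i E_i(z),
# and the nearly uncorrelated amplitudes mu_i are what the likelihood analysis constrains.
print(sigmas)
```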


Read also

In a recent paper, we argued that systematic uncertainties related to the choice of Cepheid color-luminosity calibration may have a large influence on the tension between the Hubble constant as inferred from distances to Type Ia supernovae and the cosmic microwave background as measured with the Planck satellite. Here, we investigate the impact of other sources of uncertainty in the supernova distance ladder, including Cepheid temperature and metallicity variations, supernova magnitudes and GAIA parallax distances. Excluding Milky Way Cepheids based on parallax calibration uncertainties, for the color excess calibration we obtain $H_0 = 70.8\pm 2.1$ km/s/Mpc, in $1.6\,\sigma$ tension with the Planck value.
Using a semi-analytical model developed by Choudhury & Ferrara (2005) we study the observational constraints on reionization via a principal component analysis (PCA). Assuming that reionization at $z>6$ is primarily driven by stellar sources, we decompose the unknown function $N_{\rm ion}(z)$, representing the number of photons in the IGM per baryon in collapsed objects, into its principal components and constrain the latter using the photoionization rate obtained from the Ly-alpha forest Gunn-Peterson optical depth, the WMAP7 electron scattering optical depth and the redshift distribution of Lyman-limit systems at $z\sim 3.5$. The main findings of our analysis are: (i) It is sufficient to model $N_{\rm ion}(z)$ over the redshift range $2<z<14$ using 5 parameters to extract the maximum information contained within the data. (ii) All quantities related to reionization can be severely constrained for $z<6$ because of a large number of data points, whereas constraints at $z>6$ are relatively loose. (iii) The weak constraints on $N_{\rm ion}(z)$ at $z>6$ do not allow us to disentangle different feedback models with present data. There is a clear indication that $N_{\rm ion}(z)$ must increase at $z>6$, thus ruling out reionization by a single stellar population with a non-evolving IMF, and/or star-forming efficiency, and/or photon escape fraction. The data allow for a non-monotonic $N_{\rm ion}(z)$ which may contain sharp features around $z\sim 7$. (iv) The PCA implies that reionization must be 99% completed between $5.8<z<10.3$ (95% confidence level) and is expected to be 50% complete at $z\approx 9.5$-$12$. With future data sets, like those obtained by Planck, the $z>6$ constraints will be significantly improved.
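A schematic of the mode-truncation step mentioned in point (i) above, assuming a Fisher matrix assembled from the data sets listed in that abstract (mocked here with random numbers). The 95% cumulative-information threshold is an illustrative choice, not the criterion used in the paper.

```python
import numpy as np

# Mock Fisher matrix for N_ion(z) binned over 2 < z < 14; in the actual analysis it
# combines the Ly-alpha photoionization rate, the WMAP7 optical depth and the
# Lyman-limit system abundance.
rng = np.random.default_rng(1)
J = rng.normal(size=(30, 20))
F = J.T @ J

eigvals = np.sort(np.linalg.eigvalsh(F))[::-1]

# Keep as many modes as are needed to capture most of the available information,
# quantified here by the cumulative eigenvalue fraction (the 95% cut is arbitrary).
frac = np.cumsum(eigvals) / eigvals.sum()
n_keep = int(np.searchsorted(frac, 0.95)) + 1
print("modes retained:", n_keep)  # the paper finds 5 suffice for its data combination
```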
We show how to efficiently project a vector onto the top principal components of a matrix, without explicitly computing these components. Specifically, we introduce an iterative algorithm that provably computes the projection using few calls to any black-box routine for ridge regression. By avoiding explicit principal component analysis (PCA), our algorithm is the first with no runtime dependence on the number of top principal components. We show that it can be used to give a fast iterative method for the popular principal component regression problem, giving the first major runtime improvement over the naive method of combining PCA with regression. To achieve our results, we first observe that ridge regression can be used to obtain a smooth projection onto the top principal components. We then sharpen this approximation to true projection using a low-degree polynomial approximation to the matrix step function. Step function approximation is a topic of long-term interest in scientific computing. We extend prior theory by constructing polynomials with simple iterative structure and rigorously analyzing their behavior under limited precision.
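A minimal numpy sketch of the idea in the abstract above: a single ridge-regression call applies a smooth spectral step to a vector, and that step is then sharpened by composing a low-degree polynomial with itself. The dense solver, the smoothstep polynomial and the function names here are illustrative stand-ins under those assumptions, not the paper's actual construction or analysis.

```python
import numpy as np

def ridge_solve(A, b, lam):
    """Black-box ridge regression: argmin_w ||A w - b||^2 + lam ||w||^2.
    Solved densely here for clarity; any iterative ridge solver could be plugged in."""
    d = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

def smooth_step(A, v, lam):
    """One ridge call applies S = (A^T A + lam I)^{-1} A^T A to v.
    S shares eigenvectors with A^T A and has eigenvalues t = lambda_i / (lambda_i + lam):
    a *smooth* step, close to 1 on the top components and close to 0 on the rest."""
    return ridge_solve(A, A @ v, lam)

def project_onto_top_components(A, v, lam, depth=3):
    """Approximate the projection of v onto the principal components of A with
    eigenvalue above lam, matrix-free: sharpen the smooth step by composing the
    polynomial p(t) = 3 t^2 - 2 t^3 with itself `depth` times (a simple stand-in
    for the more efficient step-function polynomials constructed in the paper)."""
    apply_M = lambda w: smooth_step(A, w, lam)
    for _ in range(depth):
        prev = apply_M
        def apply_M(w, prev=prev):
            m1 = prev(w)          # M w
            m2 = prev(m1)         # M^2 w
            m3 = prev(m2)         # M^3 w
            return 3.0 * m2 - 2.0 * m3   # p(M) w, pushes eigenvalues toward {0, 1}
    return apply_M(v)
```

Since each composition level only needs a few applications of the previous operator, the whole projection reduces to a modest number of ridge solves, which is the point of the approach.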
L. Amendola (2011)
We discuss methods based on Principal Component Analysis to constrain the dark energy equation of state using a combination of Type Ia supernovae at low redshift and spectroscopic measurements of varying fundamental couplings at higher redshifts. We discuss the performance of this method when future better-quality datasets are available, focusing on two forthcoming ESO spectrographs - ESPRESSO for the VLT and CODEX for the E-ELT - which include these measurements as a key part of their science cases. These can realize the prospect of a detailed characterization of dark energy properties almost all the way up to redshift 4.
The next generation of weak lensing surveys will trace the evolution of matter perturbations and gravitational potentials from the matter dominated epoch until today. Along with constraining the dynamics of dark energy, they will probe the relations between matter overdensities, local curvature, and the Newtonian potential. We work with two functions of time and scale to account for any modifications of these relations in the linear regime from those in the LCDM model. We perform a Principal Component Analysis (PCA) to find the eigenmodes and eigenvalues of these functions for surveys like DES and LSST. This paper builds on and significantly extends the PCA analysis of Zhao et al. (2009) in several ways. In particular, we consider the impact of some of the systematic effects expected in weak lensing surveys. We also present the PCA in terms of other choices of the two functions needed to parameterize modified growth on linear scales, and discuss their merits. We analyze the degeneracy between the modified growth functions and other cosmological parameters, paying special attention to the effective equation of state w(z). Finally, we demonstrate the utility of the PCA as an efficient data compression stage which enables one to easily derive constraints on parameters of specific models without recalculating Fisher matrices from scratch.
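A small sketch of the data-compression use mentioned at the end of the abstract above: once the eigenmodes of the modified-growth functions and their errors are available, constraints on a specific model follow from projecting its predicted deviation onto the modes, with no new Fisher-matrix calculation. The arrays below are mock placeholders and the single 1-D grid stands in for the full (z, k) dependence.

```python
import numpy as np

# Mock eigenmodes E (n_bins x n_modes) and their 1-sigma errors, as would be released
# from a survey's PCA; a real analysis would tabulate them on a (z, k) grid for both
# modified-growth functions.
rng = np.random.default_rng(2)
n_bins, n_modes = 50, 10
E, _ = np.linalg.qr(rng.normal(size=(n_bins, n_modes)))  # mock orthonormal eigenmodes
sigma = np.sort(rng.uniform(0.01, 0.5, n_modes))         # mock errors on the mode amplitudes

def chi2_from_modes(delta_mu):
    """Compress a model's deviation from LCDM onto the eigenmodes and accumulate the
    chi^2 from the (nearly uncorrelated) mode amplitudes -- the expensive Fisher
    forecast is reused for every candidate model."""
    amps = E.T @ delta_mu
    return float(np.sum((amps / sigma) ** 2))

# Example: a hypothetical model with a small constant offset in one growth function.
delta_mu = 0.02 * np.ones(n_bins)
print(chi2_from_modes(delta_mu))
```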