
Low-redshift tests of Newtonian cosmologies with a time-varying gravitational constant

Publication date: 2019
Field: Physics
Language: English





In this work, we investigate Newtonian cosmologies with a time-varying gravitational constant, $G(t)$. We examine whether such models can reproduce the low-redshift cosmological observations without a cosmological constant or any other sort of explicit dark energy fluid. Starting with a modified Newton's second law, where $G$ is taken as a function of time, we derive the first Friedmann-Lemaître equation, where a second parameter, $G^*$, appears as the gravitational constant. This parameter is related to the original $G$ from the second law, which remains in the acceleration equation. We use this approach to reproduce various cosmological scenarios that are studied in the literature, and we test these models with low-redshift probes: type-Ia supernovae (SNIa), baryon acoustic oscillations, and cosmic chronometers, also taking into account a possible change in the supernova intrinsic luminosity with redshift. As a result, we obtain several models with $\chi^2$ values similar to those of the standard $\Lambda$CDM cosmology. When we allow for a redshift dependence of the SNIa intrinsic luminosity, a model with a $G$ exponentially decreasing to zero while remaining positive (model 4) can explain the observations without acceleration. When we assume no redshift dependence of the SNIa luminosity, the observations favour a negative $G$ at large scales, while $G^*$ remains positive for most of these models. We conclude that these models offer interesting interpretations of the low-redshift cosmological observations without needing a dark energy term.
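To make the origin of the second parameter concrete, here is a minimal sketch of one Newtonian route by which a $G^*$ distinct from $G$ can appear; the paper's exact definitions and conventions may differ. Start from the acceleration equation for a matter-dominated universe with a time-varying coupling,

$$ \frac{\ddot a}{a} = -\frac{4\pi G(t)}{3}\,\rho, \qquad \rho = \rho_0\left(\frac{a_0}{a}\right)^3 . $$

Multiplying by $\dot a$, using the identity $-G\dot a/a^2 = \frac{d}{dt}\!\left(G/a\right) - \dot G/a$, and integrating once yields a Friedmann-like equation

$$ \left(\frac{\dot a}{a}\right)^2 = \frac{8\pi G^*(t)}{3}\,\rho - \frac{k}{a^2}, \qquad G^*(t) \equiv G(t) - a(t)\int^{t} \frac{\dot G(t')}{a(t')}\,dt' . $$

This $G^*$ reduces to $G$ when $\dot G = 0$, while the original $G(t)$ stays in the acceleration equation, matching the structure described in the abstract.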



Related research

The matter sound horizon can be inferred from the cosmic microwave background within the Standard Model. Independent direct measurements of the sound horizon are then a probe of possible deviations from the Standard Model. We aim at measuring the sound horizon $r_s$ from low-redshift indicators, which are completely independent of CMB inference. We used the measured product $H(z)r_s$ from baryon acoustic oscillations (BAO) together with supernovae Ia to constrain $H(z)/H_0$, and time-delay lenses analysed by the H0LiCOW collaboration to anchor cosmological distances ($\propto H_0^{-1}$). Additionally, we investigated the influence of adding a sample of quasars at higher redshift with standardisable UV-X-ray luminosity distances. We adopted polynomial expansions in $H(z)$ or in comoving distances, so that our inference was completely independent of any cosmological model on which the expansion history might be based. Our measurements are independent of Cepheids and of systematics from peculiar motions to within percent-level accuracy. The inferred sound horizon $r_s$ varies between $(133 \pm 8)$ Mpc and $(138 \pm 5)$ Mpc across different models. The discrepancy with CMB measurements is robust against model choice. Statistical uncertainties are comparable to systematics. The combination of time-delay lenses, supernovae, and BAO yields a distance ladder that is independent of cosmology (and of Cepheid calibration) and a measurement of $r_s$ that is independent of the CMB. These cosmographic measurements are then a competitive test of the Standard Model, regardless of the hypotheses on which the cosmology is based.
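As a rough illustration of the cosmographic idea above, the following Python sketch fits an assumed polynomial expansion $H(z)/H_0 = 1 + e_1 z + e_2 z^2$ to synthetic $H(z)r_s$ points with a hypothetical lens-like $H_0$ anchor. The numbers, the polynomial order, and the variable names are illustrative stand-ins, not the H0LiCOW data or pipeline.

import numpy as np
from scipy.optimize import minimize

# Synthetic stand-ins for BAO measurements of H(z)*r_s (units: km/s); illustrative only.
z_bao   = np.array([0.38, 0.51, 0.61, 1.48, 2.33])
Hrs     = np.array([1.20e4, 1.30e4, 1.40e4, 2.15e4, 3.25e4])
Hrs_err = 0.03 * Hrs

# Hypothetical absolute-distance anchor (e.g. time-delay lenses): H0 in km/s/Mpc.
H0_anchor, H0_err = 73.0, 2.0

def E(z, e1, e2):
    # Assumed model-independent expansion of H(z)/H0.
    return 1.0 + e1 * z + e2 * z**2

def chi2(params):
    e1, e2, H0, rs = params
    model = H0 * E(z_bao, e1, e2) * rs          # predicted H(z)*r_s in km/s
    return (np.sum(((Hrs - model) / Hrs_err) ** 2)
            + ((H0 - H0_anchor) / H0_err) ** 2)  # anchor term

res = minimize(chi2, x0=[1.0, 0.5, 70.0, 140.0], method="Nelder-Mead")
e1, e2, H0_fit, rs_fit = res.x
print(f"best-fit r_s ~ {rs_fit:.0f} Mpc with H0 ~ {H0_fit:.1f} km/s/Mpc")

The anchor term is what turns the BAO-calibrated product $H(z)r_s$ into an absolute $r_s$; without it, $H_0$ and $r_s$ would be perfectly degenerate.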
The variation of the speed of light is a much-debated issue in cosmology, with some benefits but also some controversial concerns. Many approaches to building a consistent varying-speed-of-light (VSL) theory have been developed recently. Although much theoretical debate has sprung up about their feasibility and reliability, the most obvious and straightforward way to check whether such theories are really workable has been missed or not fully employed: comparing these theories with observational data in a fully comprehensive way. In this paper we address this point: using the most up-to-date cosmological probes, we test the signal of three different candidates for a VSL theory (Barrow & Magueijo, Avelino & Martins, and Moffat). We consider many different ansätze both for the functional form of $c(z)$ (which cannot be fixed by theoretical motivations) and for the dark energy dynamics, in order to have a clear global picture from which we extract the results. We compare these results using a reliable statistical tool, the Bayesian evidence. We find that the present cosmological data are perfectly compatible with any of these VSL scenarios, but in one case (the Moffat model) we have a higher Bayesian evidence ratio in favour of VSL than in the standard $c = \mathrm{const.}$ $\Lambda$CDM scenario. Moreover, in such a scenario, the VSL signal can help to strengthen constraints on the spatial curvature (with an indication toward an open universe), to clarify some properties of dark energy (exclusion of a cosmological constant at the $2\sigma$ level), and is also falsifiable in the near future due to some peculiar features that differentiate this model from the standard model. Finally, we have applied some priors which come from cosmology and, in particular, from information theory and gravitational thermodynamics.
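Model comparison by full Bayesian evidence normally requires nested sampling; as a lightweight stand-in, the Python sketch below contrasts a one-parameter and a two-parameter fit on synthetic distance moduli using the BIC, which penalises the extra parameter in a spirit similar to the evidence. The data, models, and parameter names are all hypothetical, not the VSL ansätze tested in the paper.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic distance-modulus data mu(z); a crude illustrative stand-in.
z = np.linspace(0.05, 1.5, 40)
mu_obs = 5 * np.log10((1 + z) * z * 4300) + 25 + rng.normal(0, 0.15, z.size)
err = np.full(z.size, 0.15)

def chi2(mu_model):
    return np.sum(((mu_obs - mu_model) / err) ** 2)

# Model A: one free scale; Model B: adds a power-law distortion
# (stand-ins for a baseline cosmology vs. one extra VSL-like parameter).
chi2_A = lambda p: chi2(5 * np.log10((1 + z) * z * p[0]) + 25)
chi2_B = lambda p: chi2(5 * np.log10((1 + z) * z * p[0] * (1 + z) ** p[1]) + 25)

fitA = minimize(chi2_A, x0=[4000.0], method="Nelder-Mead")
fitB = minimize(chi2_B, x0=[4000.0, 0.0], method="Nelder-Mead")

# BIC = chi2_min + k * ln(n): the ln(n) term penalises model B's extra parameter.
n = z.size
bic_A = fitA.fun + 1 * np.log(n)
bic_B = fitB.fun + 2 * np.log(n)
print(f"Delta BIC (B - A) = {bic_B - bic_A:+.1f}  (negative favours the extra parameter)")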
R. W. Kuhne (1999)
Webb et al. presented preliminary evidence for a time-varying fine-structure constant. We show that Teller's formula for this variation is ruled out within the Einstein-de Sitter universe; however, it is compatible with cosmologies which require a large cosmological constant.
We present a new measurement of the Newtonian gravitational constant $G$ based on cold atom interferometry. Freely falling samples of laser-cooled rubidium atoms are used in a gravity gradiometer to probe the field generated by nearby source masses. In addition to its potential sensitivity, this method is intriguing, as gravity is explored by a quantum system. We report a value of $G = 6.667 \times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}}$, estimating a statistical uncertainty of $\pm 0.011 \times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}}$ and a systematic uncertainty of $\pm 0.003 \times 10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}}$. The long-term stability of the instrument and the signal-to-noise ratio demonstrated here open interesting perspectives for pushing the measurement accuracy below the 100 ppm level.
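To put the quoted error budget in context, combining the statistical and systematic uncertainties in quadrature (the usual convention, assumed here) shows how far this result sits from the 100 ppm goal:

import math

stat, syst, G = 0.011, 0.003, 6.667   # all in units of 1e-11 m^3 kg^-1 s^-2
total = math.sqrt(stat**2 + syst**2)  # quadrature sum of independent uncertainties
ppm = 1e6 * total / G                 # relative uncertainty in parts per million
print(f"combined: {total:.4f}e-11, i.e. ~{ppm:.0f} ppm")  # ~1700 ppm

The combined uncertainty is thus at the level of roughly $1700$ ppm, which is why the closing sentence frames sub-100 ppm accuracy as a future prospect.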
The Gaussian process bandit is a problem in which we want to find a maximizer of a black-box function with the minimum number of function evaluations. If the black-box function varies with time, then time-varying Bayesian optimization is a promising framework. However, a drawback of current methods is the assumption that the evaluation time for every observation is constant, which can be unrealistic for many practical applications, e.g., recommender systems and environmental monitoring. As a result, the performance of current methods can degrade when this assumption is violated. To cope with this problem, we propose a novel time-varying Bayesian optimization algorithm that can effectively handle non-constant evaluation times. Furthermore, we theoretically establish a regret bound for our algorithm. Our bound elucidates that the pattern of the evaluation-time sequence can strongly affect the difficulty of the problem. We also provide experimental results to validate the practical effectiveness of the proposed method.
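For orientation, here is a minimal numpy sketch of the time-varying GP-UCB backbone that such methods build on: an RBF kernel in the input is multiplied by a forgetting factor in time, so old observations are down-weighted. The kernel form, forgetting rate, and test function are assumptions for illustration; the paper's specific treatment of non-constant evaluation times is not reproduced here.

import numpy as np

rng = np.random.default_rng(0)

def f(x, t):
    # Illustrative time-varying objective: the optimum drifts with the step t.
    return -(x - 0.5 - 0.3 * np.sin(0.05 * t)) ** 2

def kernel(x1, t1, x2, t2, ell=0.2, eps=0.02):
    # RBF in x times (1 - eps)^(|t - t'| / 2): a common spatio-temporal choice.
    sq = (x1[:, None] - x2[None, :]) ** 2
    forget = (1.0 - eps) ** (np.abs(t1[:, None] - t2[None, :]) / 2.0)
    return np.exp(-0.5 * sq / ell**2) * forget

X_grid = np.linspace(0.0, 1.0, 101)
xs, ts, ys, noise = [], [], [], 1e-3

for t in range(60):
    if not xs:
        x_next = rng.uniform(0.0, 1.0)   # no data yet: sample at random
    else:
        Xo, To, Yo = np.array(xs), np.array(ts, float), np.array(ys)
        K  = kernel(Xo, To, Xo, To) + noise * np.eye(len(Xo))
        Ks = kernel(X_grid, np.full_like(X_grid, t), Xo, To)
        mu = Ks @ np.linalg.solve(K, Yo)                       # posterior mean
        var = 1.0 - np.einsum('ij,ji->i', Ks, np.linalg.solve(K, Ks.T))
        ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))      # beta = 2 heuristic
        x_next = X_grid[np.argmax(ucb)]
    xs.append(x_next); ts.append(t)
    ys.append(f(x_next, t) + rng.normal(0.0, np.sqrt(noise)))

print("last queried x:", round(xs[-1], 3))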
