
Low-redshift measurement of the sound horizon through gravitational time-delays

Added by Adriano Agnello
Publication date: 2019
Field: Physics
Language: English





The matter sound horizon can be inferred from the cosmic microwave background within the Standard Model. Independent direct measurements of the sound horizon are then a probe of possible deviations from the Standard Model. We aim at measuring the sound horizon $r_s$ from low-redshift indicators, which are completely independent of CMB inference. We used the measured product $H(z)\,r_s$ from baryon acoustic oscillations (BAO), together with supernovae Ia to constrain $H(z)/H_{0}$ and time-delay lenses analysed by the H0LiCOW collaboration to anchor cosmological distances ($\propto H_{0}^{-1}$). Additionally, we investigated the influence of adding a sample of higher-redshift quasars with standardisable UV--X-ray luminosity distances. We adopted polynomial expansions in $H(z)$ or in comoving distances, so that our inference was completely independent of any cosmological model on which the expansion history might be based. Our measurements are independent of Cepheids and of systematics from peculiar motions to within percent-level accuracy. The inferred sound horizon $r_s$ varies between $(133 \pm 8)$ Mpc and $(138 \pm 5)$ Mpc across different models. The discrepancy with CMB measurements is robust against model choice, and statistical uncertainties are comparable to systematics. The combination of time-delay lenses, supernovae, and BAO yields a distance ladder that is independent of cosmology (and of Cepheid calibration), and a measurement of $r_s$ that is independent of the CMB. These cosmographic measurements are therefore a competitive test of the Standard Model, regardless of the hypotheses on which the cosmology is based.
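
The core of the inference can be illustrated with a minimal numerical sketch: BAO constrain the product $H(z)\,r_s$, supernovae constrain the shape $E(z) = H(z)/H_0$ through a cosmology-free expansion, and time-delay lenses supply the absolute scale $H_0$, so $r_s = [H(z)\,r_s]_{\rm BAO} / [H_0\,E(z)]$. All numbers below (expansion coefficients, $H_0$, the mock BAO point) are invented for illustration and are not the paper's data or likelihood.

```python
import numpy as np

# Hypothetical coefficients of a cosmology-free polynomial expansion of E(z),
# as would be fitted to the supernova Hubble diagram.
e1, e2 = 0.55, 0.25
E = lambda z: 1.0 + e1 * z + e2 * z**2

# Hypothetical absolute anchor from time-delay lenses.
H0 = 73.0                          # km/s/Mpc

# Mock BAO data point: the product H(z)*r_s at one effective redshift.
z_bao = 0.51
Hrs_bao, Hrs_err = 1.34e4, 4.0e2   # km/s

# Divide out the lens-anchored expansion rate to obtain the sound horizon.
Hz = H0 * E(z_bao)
r_s, r_s_err = Hrs_bao / Hz, Hrs_err / Hz
print(f"r_s = {r_s:.1f} +/- {r_s_err:.1f} Mpc")
```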



Related research

Obtaining lensing time-delay measurements requires long-term monitoring campaigns with high enough resolution (< 1 arcsec) to separate the multiple images. In the radio, a limited number of high-resolution interferometer arrays make these observations difficult to schedule. To overcome this problem, we propose a technique for measuring gravitational time delays which relies on monitoring the total flux density with low-resolution but high-sensitivity radio telescopes to follow the variation of the brighter image. This is then used to trigger high-resolution observations, in optimal numbers, which then reveal the variation in the fainter image. We present simulations to assess the efficiency of this method, together with a pilot project observing radio lens systems with the Westerbork Synthesis Radio Telescope (WSRT) to trigger Very Large Array (VLA) observations. This new method is promising for measuring time delays because it uses relatively small amounts of time on high-resolution telescopes. This will be important in the future because high-sensitivity but limited-resolution instruments, combined with an optimal use of follow-up high-resolution observations on appropriate radio telescopes, can be used for gravitational lensing time-delay measurements by means of this new method.
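
A toy version of the triggering idea might look like the sketch below. All light-curve parameters (fluxes, flare shape, delay, noise level) are invented for illustration: the total flux density is dominated by the brighter image, so a significant departure from the quiescent level in the low-resolution monitoring flags an epoch at which high-resolution follow-up should catch the delayed variation of the fainter image.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical two-image lens: bright image A varies first, fainter image B
# repeats the variation after a delay dt_true.
t = np.arange(0.0, 200.0, 1.0)                      # monitoring epochs [days]
dt_true = 30.0                                      # true time delay [days]
flare = lambda t, t0: 1.0 + 0.3 * np.exp(-0.5 * ((t - t0) / 5.0) ** 2)

f_A = 80.0 * flare(t, 100.0)                        # brighter image [mJy]
f_B = 20.0 * flare(t, 100.0 + dt_true)              # fainter image [mJy]

# A low-resolution, high-sensitivity telescope only sees the summed flux density.
sigma = 1.0                                         # assumed noise [mJy]
total = f_A + f_B + rng.normal(0.0, sigma, t.size)

# Trigger: flag the first epoch where the monitored flux departs from the
# quiescent baseline by more than 5 sigma; this is dominated by image A.
baseline = np.median(total[:30])
triggers = t[np.abs(total - baseline) > 5.0 * sigma]
print("first trigger epoch [days]:", triggers[0] if triggers.size else None)
# High-resolution follow-up would then be scheduled to resolve the delayed
# variation of image B over the following weeks.
```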
The Hubble constant value is currently known to 10% accuracy unless assumptions are made for the cosmology (Sandage et al. 2006). Gravitational lens systems provide another probe of the Hubble constant using time-delay measurements. However, current investigations of ~20 time-delay lenses, albeit of varying levels of sophistication, have resulted in different values of the Hubble constant, ranging from 50-80 km/s/Mpc. In order to reduce uncertainties, more time-delay measurements are essential, together with better determined mass models (Oguri 2007, Saha et al. 2006). We propose a more efficient technique for measuring time delays which does not require regular monitoring with a high-resolution interferometer array. The method uses double-image and long-axis quadruple lens systems in which the brighter component varies first and dominates the total flux density. Monitoring the total flux density with low-resolution but high-sensitivity radio telescopes provides the variation of the brighter image and is used to trigger high-resolution observations, which can then be used to see the variation in the fainter image. We present simulations of this method together with a pilot project using the WSRT (Westerbork Synthesis Radio Telescope) to trigger VLA (Very Large Array) observations. This new method is promising for measuring time delays because it uses relatively small amounts of time on high-resolution telescopes. This will be important because many SKA pathfinder telescopes, such as MeerKAT (Karoo Array Telescope) and ASKAP (Australian Square Kilometre Array Pathfinder), have high sensitivity but limited resolution.
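
How a measured delay feeds into the Hubble constant can be sketched with the standard time-delay relation (this is not the paper's analysis; all numbers are assumed): the delay scales with the time-delay distance $D_{\Delta t} = (1+z_L) D_L D_S / D_{LS} \propto c/H_0$, so for a given Fermat-potential difference from the lens model, $H_0 \propto 1/\Delta t$.

```python
import numpy as np

# All numbers are assumed, purely to illustrate the scaling H0 ~ 1/(time delay).
Mpc_km = 3.0857e19               # km per Mpc
z_L, z_S = 0.5, 2.0              # hypothetical lens and source redshifts
dt_days = 90.0                   # measured delay between images [days]
dphi = 2.6e-11                   # Fermat-potential difference [rad^2] from a lens model

# Dimensionless comoving distance (in units of c/H0) for an assumed expansion
# history; any externally constrained H(z)/H0 shape could be substituted here.
Om = 0.3
E = lambda z: np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)
def chi_hat(z, n=4000):
    zz = np.linspace(0.0, z, n)
    return np.trapz(1.0 / E(zz), zz)

# Flat-space angular-diameter distances in units of c/H0.
DL  = chi_hat(z_L) / (1.0 + z_L)
DS  = chi_hat(z_S) / (1.0 + z_S)
DLS = (chi_hat(z_S) - chi_hat(z_L)) / (1.0 + z_S)
Ddt_hat = (1.0 + z_L) * DL * DS / DLS        # time-delay distance / (c/H0)

# Delta_t = D_dt * dphi / c with D_dt = Ddt_hat * c/H0, so c cancels and
# Delta_t [s] = Ddt_hat * dphi * (km per Mpc) / H0 [km/s/Mpc].
dt_sec = dt_days * 86400.0
H0 = Ddt_hat * dphi * Mpc_km / dt_sec
print(f"H0 ~ {H0:.1f} km/s/Mpc (toy numbers)")
```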
Strong lensing gravitational time delays are a powerful and cost-effective probe of dark energy. Recent studies have shown that a single lens can provide a distance measurement with 6-7% accuracy (including random and systematic uncertainties), provided sufficient data are available to determine the time delay and reconstruct the gravitational potential of the deflector. Gravitational time delays are a low-redshift (z ~ 0-2) probe and thus allow one to break degeneracies in the interpretation of data from higher-redshift probes, like the cosmic microwave background, in terms of the dark energy equation of state. Current studies are limited by the size of the sample of known lensed quasars, but this situation is about to change. Even in this decade, wide-field imaging surveys are likely to discover thousands of lensed quasars, enabling the targeted study of ~100 of these systems and resulting in substantial gains in the dark energy figure of merit. In the next decade, a further order-of-magnitude improvement will be possible with the 10000 systems expected to be detected and measured with LSST and Euclid. To fully exploit these gains, we identify three priorities. First, support for the development of software required for the analysis of the data. Second, in this decade, small robotic telescopes (1-4 m in diameter) dedicated to monitoring of lensed quasars will transform the field by delivering accurate time delays for ~100 systems. Third, in the 2020s, LSST will deliver thousands of time delays; the bottleneck will instead be the acquisition and analysis of high-resolution imaging follow-up. Thus, the top priority for the next decade is to support fast high-resolution imaging capabilities, such as those enabled by the James Webb Space Telescope and next-generation adaptive optics systems on large ground-based telescopes.
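
The headline gain from larger samples follows from simple error statistics. The sketch below is idealised, assuming independent lenses at the ~7% per-lens precision quoted above and no systematic floor, so it only illustrates the scaling rather than a realistic figure-of-merit forecast.

```python
import numpy as np

# Idealised 1/sqrt(N) scaling of the combined statistical error with sample size.
sigma_single = 0.07                       # per-lens distance precision (~7%)
for n_lenses in (1, 100, 10000):
    combined = 100.0 * sigma_single / np.sqrt(n_lenses)
    print(f"N = {n_lenses:5d} lenses: combined distance precision ~ {combined:.2f}%")
```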
In this work, we investigate Newtonian cosmologies with a time-varying gravitational constant, $G(t)$. We examine whether such models can reproduce the low-redshift cosmological observations without a cosmological constant, or any other sort of explicit dark energy fluid. Starting with a modified Newton's second law, where $G$ is taken as a function of time, we derive the first Friedmann-Lemaître equation, where a second parameter, $G^*$, appears as the gravitational constant. This parameter is related to the original $G$ from the second law, which remains in the acceleration equation. We use this approach to reproduce various cosmological scenarios that are studied in the literature, and we test these models with low-redshift probes: type-Ia supernovae (SNIa), baryon acoustic oscillations, and cosmic chronometers, taking also into account a possible change in the supernova intrinsic luminosity with redshift. As a result, we obtain several models with similar $\chi^2$ values as the standard $\Lambda$CDM cosmology. When we allow for a redshift dependence of the SNIa intrinsic luminosity, a model with a $G$ exponentially decreasing to zero while remaining positive (model 4) can explain the observations without acceleration. When we assume no redshift dependence of the SNIa luminosity, the observations favour a negative $G$ at large scales, while $G^*$ remains positive for most of these models. We conclude that these models offer interesting interpretations of the low-redshift cosmological observations, without needing a dark energy term.
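
The model comparison described here rests on a $\chi^2$ statistic computed against low-redshift data. The sketch below shows the generic form of that comparison for the cosmic-chronometer probe; the data points and the stand-in expansion rate are placeholders, not the paper's actual values or its specific $G(t)$ models.

```python
import numpy as np

# Hypothetical cosmic-chronometer measurements of H(z).
z_cc = np.array([0.17, 0.40, 0.90, 1.30])        # redshifts
H_cc = np.array([83.0, 95.0, 117.0, 168.0])      # H(z) [km/s/Mpc]
sig  = np.array([8.0, 17.0, 23.0, 17.0])         # 1-sigma errors [km/s/Mpc]

def H_model(z, H0=70.0, Om=0.3):
    # Stand-in expansion rate; a varying-G model would instead use the H(z)
    # derived from its modified Friedmann-Lemaitre equation.
    return H0 * np.sqrt(Om * (1.0 + z)**3 + 1.0 - Om)

chi2 = np.sum(((H_cc - H_model(z_cc)) / sig)**2)
print(f"chi^2 = {chi2:.2f} for {len(z_cc)} data points")
```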
We simulate the effects of gravitational lensing on the source counts of high-redshift galaxies as projected to be observed by the Hubble Frontier Fields program and the James Webb Space Telescope (JWST) in the near future. Taking the mass density profile of the lensing object to be the singular isothermal sphere (SIS) or the Navarro-Frenk-White (NFW) profile, we model a lens residing at a redshift of z_L = 0.5 and explore the radial dependence of the resulting magnification bias and its variability with the velocity dispersion of the lens, the photometric sensitivity of the instrument, the redshift of the background source population, and the intrinsic maximum absolute magnitude (M_{max}) of the sources. We find that gravitational lensing enhances the number of galaxies with redshifts z >= 13 detected in the angular region theta_E/2 <= theta <= 2 theta_E (where theta_E is the Einstein angle) by factors of ~3 and ~1.5 in the HUDF (limiting flux density ~ 9 nJy) and medium-deep JWST surveys (~ 6 nJy), respectively. Furthermore, we find that even in cases where a negative magnification bias reduces the observed number count of background sources, the lensing effect improves the sensitivity of the count to the intrinsic faint-magnitude cut-off of the Schechter luminosity function. In a field centered on a strong-lensing cluster, observations of z >= 6 and z >= 13 galaxies with JWST can be used to infer this cut-off magnitude for values as faint as M_{max} ~ -14.4 and -16.1 mag (L_{min} ~ 2.5*10^{26} and 1.2*10^{27} erg s^{-1} Hz^{-1}), respectively, within the range bracketed by existing theoretical models. Gravitational lensing may therefore offer an effective way of constraining the low-luminosity cut-off of high-redshift galaxies.
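
The magnification-bias effect for the SIS case can be sketched with standard textbook relations (this is not the paper's full SIS/NFW simulation): an image forming at radius theta has magnification mu = |1 / (1 - theta_E/theta)|, and for a cumulative count N(>S) proportional to S^(-beta) the observed count in a region of magnification mu changes by a factor mu^(beta-1). The Einstein angle and slope below are arbitrary illustrative choices.

```python
import numpy as np

theta_E = 1.0                                   # Einstein angle (arbitrary units)

def mu_sis(theta):
    """Absolute magnification of an image forming at radius theta for an SIS lens."""
    return np.abs(1.0 / (1.0 - theta_E / theta))

def count_boost(mu, beta):
    """Magnification bias: for N(>S) ~ S^(-beta), counts change by mu^(beta-1)."""
    return mu ** (beta - 1.0)

beta = 2.0                                      # hypothetical cumulative count slope
for theta in (0.6 * theta_E, 1.5 * theta_E, 2.0 * theta_E):
    mu = mu_sis(theta)
    print(f"theta = {theta:.1f}: mu = {mu:.2f}, count boost = {count_boost(mu, beta):.2f}x")
```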