
Cosmography by orthogonalized logarithmic polynomials

Added by Giada Bargiacchi
Publication date: 2021
Fields: Physics
Language: English





Cosmography is a powerful tool to investigate the kinematics of the Universe and then to reconstruct its dynamics in a model-independent way. However, recent new measurements of supernovae Ia and quasars have populated the Hubble diagram up to high redshifts ($z \sim 7.5$), and the application of the traditional cosmographic approach has become less straightforward because of the large redshifts involved. Here we investigate this issue through an expansion of the luminosity distance-redshift relation in terms of orthogonal logarithmic polynomials. In particular, we point out the advantages of a new orthogonalization procedure, and we show that such an expansion provides a very good fit over the whole $z = 0\div 7.5$ range to both real data and mock data produced under various cosmological models. Moreover, despite the fact that the cosmographic series is tested well beyond its convergence radius, the parameters obtained by expanding the luminosity distance-redshift relation for the $\Lambda$CDM model are broadly consistent with the results of a fit to mock data generated with the same cosmological model. This provides a method to test the reliability of a cosmographic function for studying cosmological models at high redshifts, and it demonstrates that the logarithmic polynomial series can be used to test the consistency of the $\Lambda$CDM model with the current Hubble diagram of quasars and supernovae Ia. We confirm a strong tension (at $>4\sigma$) between the concordance cosmological model and the Hubble diagram at $z > 1.5$. This tension is dominated by the contribution of quasars at $z > 2$ and is also beginning to appear in the few supernovae Ia observed at $z > 1$.
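As an illustrative sketch of the technique (not the authors' exact code), the expansion can be written as $d_L \propto \sum_n a_n \log^n(1+z)$ with the coefficients fitted to the Hubble diagram. The snippet below assumes the common normalization in which the $n = 1$ coefficient is fixed to 1 so that $H_0$ carries the overall scale, and it orthogonalizes the polynomial basis over the sampled redshifts with Gram-Schmidt so that the fitted coefficients come out nearly uncorrelated; function names and the truncation order are illustrative assumptions.

import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def dl_log_poly(z, h0, coeffs):
    """Luminosity distance [Mpc] from a logarithmic polynomial series,
    d_L = (c ln10 / H0) * (x + a2*x^2 + a3*x^3 + ...),  x = log10(1+z),
    with the n=1 coefficient fixed to 1 so that H0 sets the scale."""
    x = np.log10(1.0 + np.asarray(z, dtype=float))
    series = x.copy()
    for n, a_n in enumerate(coeffs, start=2):
        series += a_n * x ** n
    return (C_KM_S * np.log(10.0) / h0) * series

def orthogonalize_basis(z, max_order):
    """Gram-Schmidt rotation of the basis {x, x^2, ..., x^N} evaluated on
    the observed redshifts, so the coefficients of the rotated basis are
    (nearly) uncorrelated in a least-squares fit to the data."""
    x = np.log10(1.0 + np.asarray(z, dtype=float))
    ortho = []
    for n in range(1, max_order + 1):
        b = x ** n
        for o in ortho:
            b = b - (b @ o) / (o @ o) * o
        ortho.append(b)
    return np.array(ortho)  # shape: (max_order, len(z))

# Example: distance moduli of a mock Hubble diagram over the full range
z = np.linspace(0.01, 7.5, 500)
mu = 5.0 * np.log10(dl_log_poly(z, 70.0, [2.0, 1.5])) + 25.0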




Related research

Cosmography becomes non-predictive when cosmic data span beyond the redshift limit $z \simeq 1$. This leads to a strong convergence issue that jeopardizes its viability. In this work, we critically compare the two main solutions of the convergence problem, i.e. the $y$-parametrizations of the redshift and the alternatives to Taylor expansions based on Padé series. In particular, among several possibilities, we consider two widely adopted parametrizations, namely $y_1 = 1 - a$ and $y_2 = \arctan(a^{-1} - 1)$, where $a$ is the scale factor of the Universe. We find that the $y_2$-parametrization performs relatively better than the $y_1$-parametrization over the whole redshift domain. Even though $y_2$ overcomes the issues of $y_1$, we find that the most viable approximations of the luminosity distance $d_L(z)$ are given in terms of Padé approximations. In order to check this result by means of cosmic data, we analyze the Padé approximations up to the fifth order and compare these series with the corresponding $y$-variables of the same orders. We investigate two distinct domains involving Monte Carlo analyses of the Pantheon Supernovae Ia data, $H(z)$ and shift parameter measurements. We conclude that the (2,1) Padé approximation is statistically the optimal approach to explain low- and high-redshift data, together with the fifth-order $y_2$-parametrization. At high redshifts, the (3,2) Padé approximation cannot be fully excluded, while the (2,2) one is essentially ruled out.
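For concreteness, a minimal sketch of the two parametrizations and of a generic (2,1) Padé approximant follows; since $a = 1/(1+z)$, the two variables reduce to $y_1 = z/(1+z)$ and $y_2 = \arctan z$. The Padé coefficients below are free fit parameters, not the values obtained in the paper.

import numpy as np

def y1(z):
    """y1 = 1 - a = z/(1+z): maps z in [0, inf) into the unit interval."""
    return z / (1.0 + z)

def y2(z):
    """y2 = arctan(1/a - 1) = arctan(z): maps z in [0, inf) to [0, pi/2)."""
    return np.arctan(z)

def pade_2_1(z, p0, p1, p2, q1):
    """Generic (2,1) Pade rational approximant for d_L(z), the order the
    abstract identifies as statistically optimal; in an actual cosmographic
    fit the coefficients would be tied to (H0, q0, j0, ...)."""
    return (p0 + p1 * z + p2 * z ** 2) / (1.0 + q1 * z)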
Einstein Telescope (ET) is a third-generation gravitational-wave (GW) detector that is currently undergoing a design study. ET can detect millions of compact binary mergers up to redshifts 2-8. A small fraction of mergers might be observed in coincidence as gamma-ray bursts, helping to measure both the luminosity distance and redshift to the source. By fitting these measured values to a cosmological model, it should be possible to accurately infer the dark-energy equation of state and the dark-matter and dark-energy density parameters. ET could, therefore, herald a new era in cosmology.
A method to set constraints on the parameters of extended theories of gravitation is presented. It is based on the comparison of two series expansions of any observable that depends on $H(z)$. The first expansion is of the cosmographic type, while the second uses the dependence of $H$ on $z$ furnished by a given type of extended theory. When applied to $f(R)$ theories together with the redshift drift, the method yields limits on the parameters of two examples (the theory of Hu and Sawicki (2007) and the exponential gravity introduced by Linder (2009)) that are compatible with or more stringent than the existing ones, as well as a limit on a previously unconstrained parameter.
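As a hedged illustration of the comparison, the cosmographic side of the method can be represented by the standard low-redshift Taylor expansion of $H(z)$ in the deceleration and jerk parameters; matching it term by term against the $H(z)$ predicted by a given $f(R)$ model is what yields the parameter limits. The truncation order below is an assumption for illustration only.

def hubble_cosmographic(z, h0, q0, j0):
    """Second-order, model-independent cosmographic expansion of H(z):
    H(z) ~ H0 * [1 + (1 + q0) * z + (j0 - q0**2) * z**2 / 2],
    with q0 the deceleration and j0 the jerk parameter at z = 0.
    A given f(R) model supplies its own H(z); equating the two series
    order by order constrains the theory's parameters."""
    return h0 * (1.0 + (1.0 + q0) * z + 0.5 * (j0 - q0 ** 2) * z ** 2)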
Using a new sub-sample of observed strong gravitational lens systems, for the first time, we present the equation for the angular diameter distance in the $y$-redshift scenario for cosmography and use it to test the cosmographic parameters. In addition, we also use observational Hubble data from cosmic chronometers, and a joint analysis of both data sets is performed. Among the most important conclusions is that this new cosmographic analysis using strong lensing systems is as competitive in constraining the cosmographic parameters as others presented in the literature. Additionally, we present the reconstruction of the effective equation of state inferred from our samples, showing that at $z = 0$ the reconstructions from the strong lensing systems and the joint analysis are in concordance with the standard model of cosmology.
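As an illustration of the strong-lensing observable involved (the standard singular-isothermal-sphere relation, not necessarily the exact modelling of the paper), the Einstein radius ties the measured stellar velocity dispersion to the distance ratio $D_{ds}/D_s$, which is where a cosmographic expansion of the angular diameter distance in the $y$-redshift enters:

import numpy as np

C_KM_S = 299792.458  # speed of light [km/s]

def theta_einstein(sigma_v_kms, d_ds_over_ds):
    """Einstein radius [rad] of a singular isothermal sphere (SIS) lens:
    theta_E = 4 * pi * (sigma_v / c)**2 * (D_ds / D_s).
    Inverting this for D_ds/D_s from the measured theta_E and sigma_v
    gives the distance ratio that a cosmographic d_A(y) series must fit."""
    return 4.0 * np.pi * (sigma_v_kms / C_KM_S) ** 2 * d_ds_over_ds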
Gamma-ray bursts (GRBs) detected at high redshift can be used to trace the cosmic expansion history. However, the calibration of their luminosity distances is not an easy task compared with Type Ia Supernovae (SNeIa). To calibrate these data, correlations between their luminosity and other observed properties of GRBs need to be identified, and we must consider the validity of our assumptions about these correlations over their entire observed redshift range. In this work, we propose a new method to calibrate GRBs as cosmological distance indicators using SNeIa observations with a completely model-independent deep-learning architecture. An overview of this machine-learning technique was developed in [1] to study the evolution of dark energy models at high redshift. The aim of the method developed in this work is to combine two networks: a Recurrent Neural Network (RNN) and a Bayesian Neural Network (BNN). Using this computational approach, denoted RNN+BNN, we extend the networks' efficacy by adding the computation of covariance matrices to the Bayesian process. Once this is done, the SNeIa distance-redshift relation can be tested on the full GRB sample and therefore used to implement a cosmographic reconstruction of the distance-redshift relation in different regimes. Thus, our newly trained neural network is used to constrain the parameters describing the kinematical state of the Universe via a cosmographic approach at high redshifts (up to $z \approx 10$), requiring only a minimal set of assumptions that do not rely on the dynamical equations of any specific theory of gravity.
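A minimal sketch of the recurrent half of such an architecture is shown below (in PyTorch, with Monte Carlo dropout standing in for the full Bayesian network of the paper); the layer sizes, dropout rate, and use of LSTM cells are illustrative assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class DistanceRNN(nn.Module):
    """Toy recurrent model mapping a redshift sequence to a distance
    modulus; the Bayesian step of RNN+BNN is approximated here by
    keeping dropout active at inference time (MC dropout)."""
    def __init__(self, hidden=32, p_drop=0.2):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.drop = nn.Dropout(p_drop)
        self.head = nn.Linear(hidden, 1)

    def forward(self, z_seq):                  # z_seq: (batch, steps, 1)
        out, _ = self.rnn(z_seq)
        return self.head(self.drop(out[:, -1, :]))

def mc_predict(model, z_seq, n_samples=100):
    """Repeated stochastic forward passes give a mean prediction and a
    spread that serves as an uncertainty estimate for the GRB sample."""
    model.train()                              # keep dropout stochastic
    preds = torch.stack([model(z_seq) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)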