
Cosmological Constraints from the Anisotropic Clustering Analysis using BOSS DR9

Added by Yong-Seon Song
Publication date: 2013
Field: Physics
Language: English





Our observations of the Universe are fundamentally anisotropic: data from galaxies separated transverse to the line of sight come from the same epoch, while data from galaxies separated parallel to the line of sight come from different times. Moreover, galaxy velocities along the line of sight change their redshifts, giving redshift-space distortions. We perform a full two-dimensional anisotropy analysis of galaxy clustering data, fitting the angular diameter distance D_A, the Hubble parameter H, and the growth rate dδ/dln a in a substantially model-independent manner, without assuming a dark energy model. The results demonstrate consistency with LCDM expansion and growth, hence also testing general relativity. We also point out the interpretation dependence of the effective redshift z_eff and its cosmological impact for next-generation surveys.
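As a point of reference for the three fitted quantities, the sketch below evaluates H(z), D_A(z), and the growth rate in flat LCDM. It is an illustration under assumed fiducial parameters (Om = 0.31, H0 = 67.7), not the paper's pipeline.

```python
# A minimal sketch, assuming fiducial flat-LCDM parameters, of the three
# quantities the anisotropic fit targets: H(z), D_A(z), and the growth
# rate f = d(ln delta)/d(ln a). An illustration, not the paper's pipeline.
import numpy as np
from scipy.integrate import quad, odeint

c = 299792.458               # speed of light [km/s]
Om, H0 = 0.31, 67.7          # assumed matter density and H0 [km/s/Mpc]

def hubble(z):
    """Hubble parameter H(z) [km/s/Mpc] in flat LCDM."""
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def angular_diameter_distance(z):
    """D_A(z) [Mpc] in a spatially flat universe."""
    chi, _ = quad(lambda zp: c / hubble(zp), 0.0, z)   # comoving distance
    return chi / (1.0 + z)

def growth_rate(z):
    """f = d(ln delta)/d(ln a) from the linear growth ODE."""
    def rhs(y, lna):
        d, dp = y                                      # delta, d(delta)/d(ln a)
        a = np.exp(lna)
        Oma = Om / (Om + (1.0 - Om) * a**3)            # Omega_m(a)
        return [dp, -(2.0 - 1.5 * Oma) * dp + 1.5 * Oma * d]
    lna = np.linspace(np.log(1e-3), np.log(1.0 / (1.0 + z)), 500)
    d, dp = odeint(rhs, [1e-3, 1e-3], lna).T           # delta ~ a initially
    return dp[-1] / d[-1]

z = 0.57                     # roughly the BOSS CMASS effective redshift
print(f"H = {hubble(z):.1f} km/s/Mpc, "
      f"D_A = {angular_diameter_distance(z):.0f} Mpc, f = {growth_rate(z):.3f}")
```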



Related research

We analyse the clustering of cosmic large scale structure using a consistent modified gravity perturbation theory, accounting for anisotropic effects along and transverse to the line of sight. The growth factor has a particular scale dependence in f(R) gravity, and we fit for the shape parameter $f_{R0}$ simultaneously with the distance and the large scale (general relativity) limit of the growth function. Using more than 690,000 galaxies in the Baryon Oscillation Spectroscopic Survey Data Release 11, we find no evidence for extra scale dependence, with the 95% confidence upper limit $|f_{R0}| < 8 \times 10^{-4}$. Future clustering data, such as from the Dark Energy Spectroscopic Instrument, can use this consistent methodology to impose tighter constraints.
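The scale dependence being constrained can be sketched as follows, assuming the Hu-Sawicki n = 1 form of f(R) and illustrative parameter values (Om = 0.31 and the quoted upper limit |f_R0| = 8e-4): modes well inside the scalaron Compton wavelength feel G_eff/G approaching 4/3 and grow faster, which is the signature the fit searches for. This is a sketch under those assumptions, not the paper's fitting code.

```python
# A minimal sketch, assuming Hu-Sawicki n=1 f(R) and illustrative parameters,
# of the scale-dependent linear growth the fit constrains.
import numpy as np
from scipy.integrate import odeint

Om = 0.31                    # assumed fiducial matter density
H0c = 1.0 / 2997.9           # H0/c in h/Mpc, so k and the scalaron mass share units

def scalaron_mass2(a, fR0):
    """Scalaron mass squared m^2(a) in (h/Mpc)^2 for Hu-Sawicki n=1."""
    num = (Om / a**3 + 4.0 * (1.0 - Om))**3
    den = 2.0 * fR0 * (Om + 4.0 * (1.0 - Om))**2
    return H0c**2 * num / den

def mu(k, a, fR0):
    """Effective gravitational coupling G_eff/G for comoving mode k [h/Mpc]."""
    if fR0 == 0.0:
        return 1.0           # general-relativity limit
    m2 = scalaron_mass2(a, fR0)
    return (4.0 * k**2 + 3.0 * a**2 * m2) / (3.0 * (k**2 + a**2 * m2))

def growth_rate(k, z, fR0):
    """f = d(ln delta)/d(ln a) from the scale-dependent linear growth ODE."""
    def rhs(y, lna):
        d, dp = y
        a = np.exp(lna)
        Oma = Om / (Om + (1.0 - Om) * a**3)          # Omega_m(a)
        return [dp, -(2.0 - 1.5 * Oma) * dp + 1.5 * Oma * mu(k, a, fR0) * d]
    lna = np.linspace(np.log(1e-3), np.log(1.0 / (1.0 + z)), 500)
    d, dp = odeint(rhs, [1e-3, 1e-3], lna).T         # delta ~ a in matter domination
    return dp[-1] / d[-1]

for k in (0.01, 0.1):        # h/Mpc: above vs. below the scalaron Compton scale
    print(f"k = {k}: f_GR = {growth_rate(k, 0.57, 0.0):.3f}, "
          f"f_fR = {growth_rate(k, 0.57, 8e-4):.3f}")
```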
We analyze the clustering of large scale structure in the Universe using a model-independent method, accounting for anisotropic effects along and transverse to the line of sight. The Baryon Oscillation Spectroscopic Survey Data Release 11 provides a large sample of 690,000 galaxies, allowing determination of the Hubble expansion H, the angular diameter distance D_A, and the growth rate G_T at an effective redshift of z = 0.57. After careful bias and convergence studies of the effects from small scale clustering, we find that cutting transverse separations below 40 Mpc/h delivers robust results, while smaller scale data lead to a bias due to unmodelled nonlinear and velocity effects. The converged results are in agreement with concordance LCDM cosmology, general relativity, and minimal neutrino mass, all within the 68% confidence level. We also present results separately for the northern and southern hemisphere sky, finding a slight tension in the growth rate (potentially a signature of anisotropic stress, or just covariance with small scale velocities), but within the 68% CL.
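The convergence test described above can be pictured as a scale cut applied to the 2D clustering data vector before refitting. The sketch below is hypothetical (binning, names, and the refit step are illustrative, not the paper's code); the idea is to check that the inferred (H, D_A, G_T) stabilise by the quoted 40 Mpc/h cut.

```python
# A hypothetical sketch of the scale-cut convergence test: mask 2D clustering
# bins below a transverse-separation cut, refit, and watch for stability.
import numpy as np

def scale_cut_mask(s_perp, s_par, s_perp_min, s_max=150.0):
    """Keep bins with s_perp >= s_perp_min and total separation <= s_max [Mpc/h]."""
    s = np.hypot(s_perp, s_par)
    return (s_perp >= s_perp_min) & (s <= s_max)

# stand-in binning for the anisotropic correlation function xi(s_perp, s_par)
s_perp, s_par = np.meshgrid(np.arange(2.5, 155.0, 5.0), np.arange(2.5, 155.0, 5.0))
for cut in (20.0, 30.0, 40.0, 50.0):
    kept = scale_cut_mask(s_perp, s_par, cut).sum()
    print(f"s_perp_min = {cut:4.0f} Mpc/h -> {kept} bins in the fit")
    # ...refit (H, D_A, G_T) on the masked data vector and compare posteriors
```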
The Ly$\alpha$ forest transmission probability distribution function (PDF) is an established probe of intergalactic medium (IGM) astrophysics, especially the temperature-density relationship of the IGM. We measure the transmission PDF from 3393 Baryon Oscillation Spectroscopic Survey (BOSS) quasars from SDSS Data Release 9 and compare with mock spectra that include careful modeling of the noise, continuum, and astrophysical uncertainties. The BOSS transmission PDFs, measured at $\langle z \rangle = [2.3, 2.6, 3.0]$, are compared with PDFs created from mock spectra drawn from a suite of hydrodynamical simulations that sample the IGM temperature-density relationship, $\gamma$, and the temperature at mean density, $T_0$, where $T(\Delta) = T_0 \Delta^{\gamma-1}$. We find that a significant population of partial Lyman-limit systems with a column-density distribution slope of $\beta_{\mathrm{pLLS}} \sim -2$ is required to explain the data at the low-transmission end of the transmission PDF, while uncertainties in the mean Ly$\alpha$ forest transmission affect the high-transmission end. After modelling the LLSs and marginalizing over mean-transmission uncertainties, we find that $\gamma = 1.6$ best describes the data over our entire redshift range, although constraints on $T_0$ are affected by systematic uncertainties. Within our model framework, isothermal or inverted temperature-density relationships ($\gamma \leq 1$) are disfavored at a significance of over 4$\sigma$, although this could be somewhat weakened by cosmological and astrophysical uncertainties that we did not model.
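For concreteness, the quoted temperature-density relation $T(\Delta) = T_0 \Delta^{\gamma-1}$ can be evaluated directly. The sketch below uses the preferred $\gamma = 1.6$ from the abstract; the value T0 = 1.2e4 K is an illustrative assumption, since the paper's $T_0$ constraint is systematics-limited.

```python
# A worked example of the quoted temperature-density relation
# T(Delta) = T_0 * Delta^(gamma - 1). gamma = 1.6 is the preferred value above;
# T0 = 1.2e4 K is an illustrative assumption, not a measured value.
def igm_temperature(delta, T0=1.2e4, gamma=1.6):
    """IGM temperature [K] at overdensity Delta = rho / rho_mean."""
    return T0 * delta**(gamma - 1.0)

for delta in (0.1, 1.0, 10.0):   # void, mean density, mild overdensity
    print(f"Delta = {delta:5.1f} -> T = {igm_temperature(delta):7.0f} K")
```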
We use analytic covariance matrices to carry out a full-shape analysis of the galaxy power spectrum multipoles from the Baryon Oscillation Spectroscopic Survey (BOSS). We obtain parameter estimates that agree well with those based on the sample covariance from two thousand galaxy mock catalogs, thus validating the analytic approach and providing substantial reduction in computational cost. We also highlight a number of additional advantages of analytic covariances. First, the analysis does not suffer from sampling noise, which biases the constraints and typically requires inflating parameter error bars. Second, it allows us to study convergence of the cosmological constraints when recomputing the analytic covariances to match the best-fit power spectrum, which can be done at a negligible computational cost, unlike when using mock catalogs. These effects reduce the systematic error budget of cosmological constraints, which suggests that the analytic approach may be an important tool for upcoming high-precision galaxy redshift surveys such as DESI and Euclid. Finally, we study the impact of various ingredients in the power spectrum covariance matrix and show that the non-Gaussian part, which includes the regular trispectrum and super-sample covariance, has a marginal effect ($\lesssim 10\%$) on the cosmological parameter error bars. We also suggest improvements to analytic covariances that are commonly used in Fisher forecasts.
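As an illustration of the Gaussian ingredient of such an analytic covariance (the dominant part, given the quoted $\lesssim 10\%$ non-Gaussian effect), the sketch below assumes a diagonal-in-k monopole covariance with shot noise. All survey numbers and the power spectrum shape are made-up stand-ins, not BOSS values.

```python
# A minimal sketch, under simplifying assumptions, of the Gaussian part of an
# analytic power-spectrum covariance: diagonal in k, with 1/nbar shot noise.
# The paper's full covariance also includes window, multipole, and
# non-Gaussian (trispectrum, super-sample) terms.
import numpy as np

def gaussian_pk_covariance(k, Pk, nbar, volume, dk):
    """Diagonal Gaussian covariance of the monopole P(k).
    k [h/Mpc], Pk [(Mpc/h)^3], nbar [(h/Mpc)^3], volume [(Mpc/h)^3], dk [h/Mpc]."""
    n_modes = volume * 4.0 * np.pi * k**2 * dk / (2.0 * np.pi)**3 / 2.0
    return np.diag(2.0 * (Pk + 1.0 / nbar)**2 / n_modes)

k = np.arange(0.025, 0.2, 0.01)
Pk = 2.0e4 / (1.0 + (k / 0.05)**1.5)      # crude stand-in for a galaxy P(k)
cov = gaussian_pk_covariance(k, Pk, nbar=3e-4, volume=3.0e9, dk=0.01)
print(np.sqrt(np.diag(cov)) / Pk)          # fractional band-power errors
```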
We apply two compression methods to the galaxy power spectrum monopole/quadrupole and bispectrum monopole measurements from the BOSS DR12 CMASS sample. Both methods reduce the dimension of the original data vector to the number of cosmological parameters considered, using the Karhunen-Loève algorithm with an analytic covariance model. In the first case, we infer the posterior through MCMC sampling from the likelihood of the compressed data vector (MC-KL). The second, faster option works by first Gaussianizing and then orthogonalizing the parameter space before the compression; in this option (G-PCA) we only need to run a low-resolution preliminary MCMC for the Gaussianization to compute our posterior. Both compression methods accurately reproduce the posterior distributions obtained by standard MCMC sampling on the CMASS dataset for a $k$-space range of $0.03$-$0.12\,h/\mathrm{Mpc}$. The compression enables us to increase the number of bispectrum measurements by a factor of $\sim 23$ over the standard binning (from 116 to 2734 triangles used), which is otherwise limited by the number of mock catalogues available. This reduces the 68% credible intervals for the parameters $(b_1, b_2, f, \sigma_8)$ by $(-24.8\%, -52.8\%, -26.4\%, -21\%)$, respectively. The best-fit values we obtain are $(b_1 = 2.31 \pm 0.17$, $b_2 = 0.77 \pm 0.19$, $f(z_{\mathrm{CMASS}}) = 0.67 \pm 0.06$, $\sigma_8(z_{\mathrm{CMASS}}) = 0.51 \pm 0.03)$. Using these methods for future redshift surveys like DESI, Euclid and PFS will drastically reduce the number of simulations needed to compute accurate covariance matrices and will facilitate tighter constraints on cosmological parameters.
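A minimal sketch of the Karhunen-Loève (score) compression step, under the usual linear-compression assumptions: the data vector is projected onto $C^{-1}$-weighted model derivatives, giving one compressed number per parameter. The arrays below are random stand-ins, not BOSS measurements or the paper's analytic covariance.

```python
# A minimal sketch of Karhunen-Loeve (score) compression: project the data
# vector onto C^{-1}-weighted model derivatives, one number per parameter.
# All arrays are random stand-ins for illustration.
import numpy as np

def kl_compress(data, mean, dmean_dtheta, cov):
    """Compress an N-dim data vector to n_params numbers.
    dmean_dtheta: (n_params, N) derivatives of the model mean w.r.t. parameters."""
    cinv = np.linalg.inv(cov)
    return dmean_dtheta @ cinv @ (data - mean)

rng = np.random.default_rng(0)
N, n_params = 116, 4                 # e.g. bispectrum triangles and (b1, b2, f, sigma8)
cov = np.eye(N)                      # stand-in for the analytic covariance
mean = np.zeros(N)                   # stand-in for the fiducial model prediction
dmu = rng.normal(size=(n_params, N)) # stand-in model derivatives
data = rng.multivariate_normal(mean, cov)
t = kl_compress(data, mean, dmu, cov)
print(t.shape, t)                    # 4 numbers now carry the parameter information
```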