
Preliminary Application of Reinsch Splines to Cosmology: Transition Redshift Determination with Simulated OHD

Published by Dezi Liu
Publication date: 2012
Research field: Physics
Paper language: English





Many schemes have been proposed to place model-independent constraints on cosmological dynamics, such as nonparametric reconstructions of the dark energy equation of state (EoS) $\omega(z)$ or the deceleration parameter $q(z)$. These methods usually involve differentiating noisy observational data, and the resulting numerical derivatives remain highly uncertain, especially with regard to their truncation errors. In this work, we introduce a global numerical differentiation method, first formulated by Reinsch (1967), in which the data are smoothed by cubic spline functions. The optimal solution is obtained by minimizing the functional $\Phi(f)$. To investigate the potential of the algorithm further, we apply it to the estimation of the transition redshift $z_{t}$ with a simulated expansion rate $E(z)$ based on observational Hubble parameter data (OHD). An effective method to determine the free parameter $S$ appearing in the Reinsch splines is provided.
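The smoothing-spline differentiation described above can be sketched with `scipy.interpolate.UnivariateSpline`, whose residual constraint follows Reinsch's formulation: with weights $w_i = 1/\sigma_i$, the bound on the weighted residual sum plays the role of the free parameter $S$. The fiducial cosmology, noise level, and the choice $S = N$ below are illustrative assumptions for this sketch, not the paper's calibration:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.optimize import brentq

rng = np.random.default_rng(42)
om, ol, sigma = 0.3, 0.7, 0.02          # assumed flat-LCDM fiducials and noise level
z = np.linspace(0.0, 2.0, 40)
E_true = np.sqrt(om * (1 + z)**3 + ol)
E_obs = E_true + rng.normal(0.0, sigma, z.size)

# Reinsch-style smoothing spline: with weights w_i = 1/sigma_i the constraint
# sum(w_i^2 (y_i - f(z_i))^2) <= S suggests S ~ N (Reinsch 1967).
spl = UnivariateSpline(z, E_obs, w=np.full(z.size, 1.0 / sigma), s=z.size)

# Deceleration parameter q(z) = (1+z) E'(z)/E(z) - 1 from the spline derivative
q = lambda zz: (1 + zz) * spl.derivative()(zz) / spl(zz) - 1.0

z_t = brentq(q, 0.2, 1.5)               # transition redshift where q(z_t) = 0
z_t_true = (2 * ol / om)**(1 / 3) - 1   # analytic value for this cosmology, ~0.67
print(f"estimated z_t = {z_t:.3f}, true z_t = {z_t_true:.3f}")
```

The spline's derivative is global and smooth, so no explicit finite-difference truncation error enters the estimate of $q(z)$.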


Read also

We investigate the possibility of performing cosmological studies in the redshift range $2.5<z<5$ through suitable extensions of existing and upcoming radio telescopes like CHIME, HIRAX and FAST. We use the Fisher matrix technique to forecast the bounds that those instruments can place on the growth rate, the BAO distance scale parameters, the sum of the neutrino masses and the number of relativistic degrees of freedom at decoupling, $N_{\rm eff}$. We point out that quantities that depend on the amplitude of the 21cm power spectrum, like $f\sigma_8$, are completely degenerate with $\Omega_{\rm HI}$ and $b_{\rm HI}$, and propose several strategies to independently constrain them through cross-correlations with other probes. Assuming $5\%$ priors on $\Omega_{\rm HI}$ and $b_{\rm HI}$, $k_{\rm max}=0.2~h\,{\rm Mpc}^{-1}$ and the primary beam wedge, we find that a HIRAX extension can constrain, within bins of $\Delta z=0.1$: 1) the value of $f\sigma_8$ at $\simeq 4\%$, 2) the value of $D_A$ and $H$ at $\simeq 1\%$. In combination with data from Euclid-like galaxy surveys and CMB S4, the sum of the neutrino masses can be constrained with an error equal to $23$ meV ($1\sigma$), while $N_{\rm eff}$ can be constrained within 0.02 ($1\sigma$). We derive similar constraints for the extensions of the other instruments. We study in detail the dependence of our results on the instrument, amplitude of the HI bias, the foreground wedge coverage, the nonlinear scale used in the analysis, uncertainties in the theoretical modeling and the priors on $b_{\rm HI}$ and $\Omega_{\rm HI}$. We conclude that 21cm intensity mapping surveys operating in this redshift range can provide extremely competitive constraints on key cosmological parameters.
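The Fisher matrix machinery used for such forecasts can be illustrated with a toy model; the power-law observable, the 5% per-bin errors, and the fiducial parameter values below are stand-ins for illustration, not the survey configurations analysed in the paper:

```python
import numpy as np

# Toy Fisher forecast: observable P(k) = A * k**n measured in bins with errors sigma_k.
# A and n stand in for an amplitude and a shape parameter; all values are illustrative.
k = np.logspace(-2, -0.7, 20)
A_fid, n_fid = 1.0, -1.5
sigma_k = 0.05 * A_fid * k**n_fid        # assumed 5% fractional error per bin

# Analytic derivatives of the model at the fiducial point
dP_dA = k**n_fid
dP_dn = A_fid * k**n_fid * np.log(k)

derivs = np.vstack([dP_dA, dP_dn])       # shape (n_params, n_bins)
F = (derivs / sigma_k) @ (derivs / sigma_k).T   # Fisher matrix F_ij
cov = np.linalg.inv(F)                   # forecast parameter covariance
print("1-sigma marginalized errors:", np.sqrt(np.diag(cov)))
```

The marginalized error $\sqrt{(F^{-1})_{ii}}$ is always at least the conditional error $1/\sqrt{F_{ii}}$, which is why degeneracies like the one between $f\sigma_8$, $\Omega_{\rm HI}$ and $b_{\rm HI}$ degrade forecasts unless broken by external priors or cross-correlations.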
J. R. Allison 2011
The large spectral bandwidth and wide field of view of the Australian SKA Pathfinder radio telescope will open up a completely new parameter space for large extragalactic HI surveys. Here we focus on identifying and parametrising HI absorption lines which occur in the line of sight towards strong radio continuum sources. We have developed a method for simultaneously finding and fitting HI absorption lines in radio data by using multi-nested sampling, a Bayesian Monte Carlo algorithm. The method is tested on a simulated ASKAP data cube, and is shown to be reliable at detecting absorption lines in low signal-to-noise data without the need to smooth or alter the data. Estimation of the local Bayesian evidence statistic provides a quantitative criterion for assigning significance to a detection and selecting between competing analytical line-profile models.
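The evidence-based detection criterion can be sketched on a toy absorption-line problem. Here the marginalization is done by brute force on a parameter grid with flat priors (and the line width held fixed for simplicity) rather than with nested sampling, and all signal and noise values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(-200, 200, 200)                  # velocity axis, km/s (illustrative)
true_depth, true_v0, true_w, noise = 0.05, 20.0, 30.0, 0.01
line = lambda depth, v0, w: -depth * np.exp(-0.5 * ((v - v0) / w)**2)
data = line(true_depth, true_v0, true_w) + rng.normal(0, noise, v.size)

def log_like(model):
    return -0.5 * np.sum((data - model)**2) / noise**2

# Evidence by brute-force marginalization over (depth, v0) with flat priors;
# nested sampling would do this efficiently in higher dimensions.
depths = np.linspace(0.0, 0.1, 60)
v0s = np.linspace(-100, 100, 60)
ll = np.array([[log_like(line(d, v0, true_w)) for v0 in v0s] for d in depths])
log_Z_line = np.log(np.mean(np.exp(ll - ll.max()))) + ll.max()
log_Z_null = log_like(np.zeros_like(v))          # no-line model has no free parameters

print("ln Bayes factor (line vs. no line):", log_Z_line - log_Z_null)
```

A large positive log Bayes factor favors the line model; the averaging over the prior volume builds in the Occam penalty that makes the statistic a quantitative significance criterion.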
Cosmic voids in the large-scale structure of the Universe affect the peculiar motions of objects in their vicinity. Although these motions are difficult to observe directly, the clustering pattern of their surrounding tracers in redshift space is influenced in a unique way. This allows us to investigate the interplay between densities and velocities around voids, which is solely dictated by the laws of gravity. With the help of $N$-body simulations and derived mock-galaxy catalogs we calculate the average density fluctuations around voids identified with a watershed algorithm in redshift space and compare the results with the expectation from general relativity and the $\Lambda$CDM model. We find linear theory to work remarkably well in describing the dynamics of voids. Adopting a Bayesian inference framework, we explore the full posterior of our model parameters and forecast the achievable accuracy on measurements of the growth rate of structure and the geometric distortion through the Alcock-Paczynski effect. Systematic errors in the latter are reduced from $\sim 15\%$ to $\sim 5\%$ when peculiar velocities are taken into account. The relative parameter uncertainties in galaxy surveys with number densities comparable to the SDSS MAIN (CMASS) sample probing a volume of $1\,h^{-3}{\rm Gpc}^3$ yield $\sigma_{f/b}/(f/b)\sim 2\%$ ($20\%$) and $\sigma_{D_A H}/D_A H\sim 0.2\%$ ($2\%$), respectively. At this level of precision the linear-theory model becomes systematics dominated, with parameter biases that fall beyond these values. Nevertheless, the presented method is highly model independent; its viability lies in the underlying assumption of statistical isotropy of the Universe.
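In linear theory the void dynamics referred to above reduce to a simple relation between the volume-averaged density contrast and the outflow velocity, $u(r) = -(fH/3)\,r\,\Delta(r)$. The void profile, growth rate and Hubble constant in the sketch below are illustrative choices, not values from the simulations:

```python
import numpy as np

# Linear-theory outflow around a void: u(r) = -(f*H/3) * r * Delta(r),
# where Delta(r) is the density contrast averaged within radius r.
f, H = 0.5, 100.0                         # growth rate; H in h km/s/Mpc (assumed)
r = np.linspace(0.0, 60.0, 601)           # comoving radius, Mpc/h
dr = r[1] - r[0]
delta = -0.8 * np.exp(-(r / 20.0)**2)     # toy profile: underdense toward the centre

# Volume-averaged contrast Delta(r) = (3/r^3) * int_0^r delta(x) x^2 dx
g = delta * r**2
cum = np.cumsum(0.5 * (g[:-1] + g[1:])) * dr   # cumulative trapezoid integral
Delta = 3.0 * cum / r[1:]**3
u = -(f * H / 3.0) * r[1:] * Delta        # km/s; positive values mean outflow

i = np.argmax(u)
print(f"peak outflow {u[i]:.0f} km/s at r = {r[1:][i]:.1f} Mpc/h")
```

Because $\Delta(r) < 0$ everywhere for an underdense profile, the predicted velocity is an outflow at all radii, which is the velocity signature imprinted on the redshift-space clustering of the surrounding tracers.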
We address the problem of separating stars from galaxies in future large photometric surveys. We focus our analysis on simulations of the Dark Energy Survey (DES). In the first part of the paper, we derive the science requirements on star/galaxy separation, for measurement of the cosmological parameters with the Gravitational Weak Lensing and Large Scale Structure probes. These requirements are dictated by the need to control both the statistical and systematic errors on the cosmological parameters, and by Point Spread Function calibration. We formulate the requirements in terms of the completeness and purity provided by a given star/galaxy classifier. In order to achieve these requirements at faint magnitudes, we propose a new method for star/galaxy separation in the second part of the paper. We first use Principal Component Analysis to outline the correlations between the objects' parameters and extract from it the most relevant information. We then use the reduced set of parameters as input to an Artificial Neural Network. This multi-parameter approach improves upon purely morphometric classifiers (such as the classifier implemented in SExtractor), especially at faint magnitudes: it increases the purity by up to 20% for stars and by up to 12% for galaxies, at i-magnitudes fainter than 23.
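The dimensionality-reduction step can be sketched with plain NumPy. The five-parameter catalogue below is synthetic (loosely mimicking SExtractor-like size, ellipticity and magnitude measurements), and a nearest-centroid rule in PCA space stands in for the Artificial Neural Network used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy catalogue: stars are compact (small size, low ellipticity), galaxies extended.
# Five correlated parameters per object; all numbers are illustrative.
n = 500
stars = rng.normal([1.0, 0.05, 22.0, 0.9, 0.1], [0.05, 0.02, 1.5, 0.05, 0.05], (n, 5))
gals = rng.normal([2.5, 0.30, 23.0, 0.5, 0.4], [0.60, 0.10, 1.5, 0.15, 0.10], (n, 5))
X = np.vstack([stars, gals])
y = np.r_[np.zeros(n), np.ones(n)]        # 0 = star, 1 = galaxy

# PCA via SVD on the standardized parameters: keep the two leading components
Xs = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Xs, full_matrices=False)
X_pca = Xs @ Vt[:2].T

# Nearest-centroid classifier in PCA space (a minimal stand-in for an ANN)
c_star, c_gal = X_pca[y == 0].mean(0), X_pca[y == 1].mean(0)
pred = (np.linalg.norm(X_pca - c_gal, axis=1)
        < np.linalg.norm(X_pca - c_star, axis=1)).astype(float)
print("training accuracy:", (pred == y).mean())
```

The point of the PCA step is that the leading components concentrate the correlated discriminating information, so the downstream classifier works with fewer, less noisy inputs.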
We use a suite of N-body simulations that incorporate massive neutrinos as an extra set of particles to investigate their effect on the halo mass function. We show that for cosmologies with massive neutrinos the mass function of dark matter haloes selected using the spherical overdensity (SO) criterion is well reproduced by the fitting formula of Tinker et al. (2008) once the cold dark matter power spectrum is considered instead of the total matter power, as is usually done. The differences between the two implementations, i.e. using $P_{\rm cdm}(k)$ instead of $P_{\rm m}(k)$, are more pronounced for large values of the neutrino masses and in the high end of the halo mass function: in particular, the number of massive haloes is higher when $P_{\rm cdm}(k)$ is considered rather than $P_{\rm m}(k)$. As a quantitative application of our findings we consider a Planck-like SZ-clusters survey and show that the differences in predicted number counts can be as large as $30\%$ for $\sum m_\nu = 0.4$ eV. Finally, we use the Planck-SZ clusters sample, with an approximate likelihood calculation, to derive Planck-like constraints on cosmological parameters. We find that, in a massive neutrino cosmology, our correction to the halo mass function produces a shift in the $\sigma_8(\Omega_{\rm m}/0.27)^\gamma$ relation which can be quantified as $\Delta\gamma \sim 0.05$ and $\Delta\gamma \sim 0.14$ assuming one ($N_\nu=1$) or three ($N_\nu=3$) degenerate massive neutrinos, respectively. The shift results in a lower mean value of $\sigma_8$ with $\Delta\sigma_8 = 0.01$ for $N_\nu=1$ and $\Delta\sigma_8 = 0.02$ for $N_\nu=3$, respectively. Such a difference, in a cosmology with massive neutrinos, would increase the tension between cluster abundance and Planck CMB measurements.
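The Tinker et al. (2008) multiplicity function itself is straightforward to evaluate. In the sketch below, the fit parameters are the published $z=0$, $\Delta=200$ values, while the two $\sigma$ inputs are illustrative, chosen only to show that a slightly larger $\sigma(M)$, as obtained from $P_{\rm cdm}(k)$ in a massive-neutrino cosmology, raises the predicted abundance at the high-mass end:

```python
import numpy as np

# Tinker et al. (2008) multiplicity function f(sigma) for SO(Delta=200) haloes.
A, a, b, c = 0.186, 1.47, 2.57, 1.19     # published z=0, Delta=200 fit parameters

def f_tinker(sigma):
    return A * ((sigma / b)**(-a) + 1.0) * np.exp(-c / sigma**2)

# Computing sigma(M) from P_cdm(k) instead of P_m(k) yields a slightly larger
# sigma for massive-neutrino cosmologies; the values here are illustrative.
sigma_m, sigma_cdm = 0.60, 0.63          # cluster-scale mass, toy numbers
print(f"f(sigma from P_m)   = {f_tinker(sigma_m):.4f}")
print(f"f(sigma from P_cdm) = {f_tinker(sigma_cdm):.4f}  (higher abundance)")
```

Because $f(\sigma)$ is exponentially sensitive to $\sigma$ at cluster scales, even a few percent change in $\sigma(M)$ translates into the tens-of-percent differences in number counts quoted in the abstract.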