
Statistical Tests for CHDM and LambdaCDM Cosmologies

Posted by Sebastiano Ghigna
Publication date: 1996
Research field: Physics
Paper language: English
Author: S. Ghigna





We apply several statistical estimators to high-resolution N-body simulations of two currently viable cosmological models: a mixed dark matter model with $\Omega_\nu=0.2$ contributed by two massive neutrinos (C+2$\nu$DM), and a Cold Dark Matter model with a Cosmological Constant ($\Lambda$CDM) with $\Omega_0=0.3$ and $h=0.7$. Our aim is to compare simulated galaxy samples with the Perseus-Pisces redshift survey (PPS). We consider the $n$-point correlation functions ($n=2$-$4$), the $N$-count probability functions $P_N$, including the void probability function $P_0$, and the underdensity probability function $U_\epsilon$ (where $\epsilon$ fixes the underdensity threshold as a percentage of the average). We find that $P_0$ (for which PPS and CfA2 data agree) and $P_1$ distinguish efficiently between the models, while $U_\epsilon$ is only marginally discriminatory. In contrast, the reduced skewness and kurtosis are, respectively, $S_3\simeq 2.2$ and $S_4\simeq 6$-$7$ in all cases, quite independent of the scale, in agreement with hierarchical scaling predictions and estimates based on redshift surveys. Among our results, we emphasize the remarkable agreement between PPS data and C+2$\nu$DM in all the tests performed. By contrast, the above $\Lambda$CDM model has serious difficulties in reproducing the observational data if galaxies and matter overdensities are related in a simple way.
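For concreteness, the counts-in-cells quantities above (the $P_N$ distribution, the void probability $P_0$, and the reduced skewness $S_3$) can be estimated from a galaxy catalogue roughly as follows. This is a generic sketch on a toy unclustered sample, not the paper's actual pipeline; the box size, cell radius, and the factorial-moment shot-noise correction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def count_in_cells(positions, box, radius, n_cells=5000):
    """Count galaxies inside randomly placed spheres (periodic box)."""
    centers = rng.uniform(0.0, box, size=(n_cells, 3))
    counts = np.empty(n_cells, dtype=int)
    for i, c in enumerate(centers):
        d = positions - c
        d -= box * np.round(d / box)      # minimum-image convention
        counts[i] = np.count_nonzero((d * d).sum(axis=1) < radius**2)
    return counts

def cic_statistics(counts):
    """P_0, P_1, and a shot-noise-corrected reduced skewness S_3."""
    P = np.bincount(counts, minlength=2) / counts.size
    nbar = counts.mean()
    # factorial moments remove the Poisson shot-noise contribution
    xi2 = np.mean(counts * (counts - 1)) / nbar**2 - 1.0
    xi3 = np.mean(counts * (counts - 1) * (counts - 2)) / nbar**3 - 3.0 * xi2 - 1.0
    S3 = xi3 / xi2**2 if xi2 > 0 else float("nan")
    return P[0], P[1], S3

# toy Poisson (unclustered) sample: P_0 should sit near exp(-nbar)
pos = rng.uniform(0.0, 100.0, size=(2000, 3))
p0, p1, s3 = cic_statistics(count_in_cells(pos, 100.0, 4.0))
```

For a clustered N-body sample, $P_0$ rises above the Poisson expectation and $S_3$ stabilizes near its hierarchical value, which is the behaviour the paper compares between models.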


Read also

L'Ecuyer & Simard's Big Crush statistical test suite has revealed statistical flaws in many popular random number generators, including Marsaglia's Xorshift generators. Vigna recently proposed some 64-bit variations on the Xorshift scheme that are further scrambled (i.e., Xorshift1024*, Xorshift1024+, Xorshift128+, Xoroshiro128+). Unlike their unscrambled counterparts, they pass Big Crush when interleaving blocks of 32 bits for each 64-bit word (most significant, least significant, most significant, least significant, etc.). We report that these scrambled generators systematically fail Big Crush (specifically the linear-complexity and matrix-rank tests that detect linearity) when taking the 32 lowest-order bits in reverse order from each 64-bit word.
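The failure mode described (low-order bits inheriting the linear structure of the underlying Xorshift-style state) can be illustrated with a Berlekamp-Massey linear-complexity check. The sketch below uses the original xoroshiro128+ constants (55, 14, 36) and a hypothetical seed; it examines only bit 0 rather than Big Crush's full reversed-32-bit stream, but it shows the same effect: the lowest bit has linear complexity bounded by the 128-bit state, while a high-order bit, mixed by the carries of the `+` scrambler, does not.

```python
MASK = (1 << 64) - 1

def rotl(x, k):
    return ((x << k) | (x >> (64 - k))) & MASK

class Xoroshiro128Plus:
    """xoroshiro128+ (Blackman & Vigna, original 55/14/36 constants)."""
    def __init__(self, s0, s1):
        self.s0, self.s1 = s0, s1

    def next64(self):
        s0, s1 = self.s0, self.s1
        out = (s0 + s1) & MASK            # bit 0 of out is s0^s1: no carry, purely linear
        s1 ^= s0
        self.s0 = rotl(s0, 55) ^ s1 ^ ((s1 << 14) & MASK)
        self.s1 = rotl(s1, 36)
        return out

def berlekamp_massey(bits):
    """Length of the shortest LFSR over GF(2) that generates `bits`."""
    n = len(bits)
    c, b = [0] * n, [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t, p = c[:], i - m
            for j in range(n - p):
                c[j + p] ^= b[j]
            if 2 * L <= i:
                L, b, m = i + 1 - L, t, i
    return L

gen = Xoroshiro128Plus(0x9E3779B97F4A7C15, 0xBF58476D1CE4E5B9)  # arbitrary seed
words = [gen.next64() for _ in range(400)]
L_low = berlekamp_massey([w & 1 for w in words])            # capped by the 128-bit state
L_high = berlekamp_massey([(w >> 63) & 1 for w in words])   # carries break linearity
```

`L_low` comes out at the state-size bound (128), while `L_high` lands near half the sequence length, as a random-looking stream should; tests that consume only low-order bits therefore see the linearity.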
The Bland and Altman plot method is a graphical approach for comparing related data sets, supporting the eventual replacement of one measurement method by another. Perhaps due to its easy graphical output it has been widely applied, but often misinterpreted. We provide three nested tests (accuracy, precision, and agreement) as a means to reach statistical support for the equivalence of measurements. These are based on structural regressions added to the method, converting it into an inferential statistical criterion that verifies mean equality (accuracy), homoscedasticity (precision), and concordance with a bisector line (agreement). A graphical output illustrating these three tests was added to follow Bland and Altman's principles. Five pairs of data sets from previously published articles that applied Bland and Altman's principles illustrate this statistical approach. One case demonstrated strict equivalence, three cases showed partial equivalence, and one case showed no equivalence. We show that this statistical approach, added to the graphical outputs, turns the otherwise subjective interpretation of the Bland-Altman plot into a clear and objective result, with a significance value, for a reliable and better-communicable decision.
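As a rough illustration of the three nested tests, the sketch below uses plausible stand-ins: a paired t-test for accuracy, a simple heteroscedasticity check for precision, and an OLS slope test against the bisector for agreement. The paper itself uses structural regressions, so this OLS version is an assumption, not the authors' exact procedure, and all names and data are hypothetical.

```python
import numpy as np
from scipy import stats

def bland_altman_tests(a, b):
    """Nested accuracy / precision / agreement checks for two methods."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    d, m = a - b, (a + b) / 2.0
    # accuracy: is the mean difference zero? (paired t-test)
    _, p_acc = stats.ttest_1samp(d, 0.0)
    # precision: is the spread of differences independent of magnitude?
    # (|centred difference| vs. mean: a crude heteroscedasticity check)
    _, p_prec = stats.pearsonr(np.abs(d - d.mean()), m)
    # agreement: is the regression of a on b close to the bisector (slope 1)?
    res = stats.linregress(b, a)
    t_slope = (res.slope - 1.0) / res.stderr
    p_slope = 2.0 * stats.t.sf(abs(t_slope), a.size - 2)
    return {"accuracy_p": p_acc, "precision_p": p_prec,
            "slope": res.slope, "slope_vs_1_p": p_slope}

# synthetic example: two methods measuring the same quantity with small noise,
# so all three tests should fail to reject equivalence
rng = np.random.default_rng(0)
b = rng.uniform(0.0, 10.0, 200)
a = b + rng.normal(0.0, 0.1, 200)
report = bland_altman_tests(a, b)
```

The appeal of the nested scheme is that each p-value answers a separate, ordered question, rather than leaving the reader to eyeball the scatter of differences.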
Variation of the speed of light is a much-debated issue in cosmology, with some benefits but also some controversial concerns. Many approaches to developing a consistent varying speed of light (VSL) theory have been proposed recently. Although much theoretical debate has sprouted about their feasibility and reliability, the most obvious and straightforward way to discriminate among such theories and check whether they are really workable has been missed out or not fully employed: the comparison of these theories with observational data in a fully comprehensive way. In this paper we address this point: using the most up-to-date cosmological probes, we test the signal of three different candidates for a VSL theory (Barrow & Magueijo, Avelino & Martins, and Moffat). We consider many different ansätze for both the functional form of $c(z)$ (which cannot be fixed by theoretical motivations) and the dark energy dynamics, in order to have a clear global picture from which to extract results. We compare these results using a reliable statistical tool, the Bayesian Evidence. We find that present cosmological data are perfectly compatible with any of these VSL scenarios, but in one case (the Moffat model) we obtain a higher Bayesian Evidence ratio in favour of VSL than in the standard $c=$ constant $\Lambda$CDM scenario. Moreover, in that scenario the VSL signal can help to strengthen constraints on the spatial curvature (with an indication toward an open universe), to clarify some properties of dark energy (exclusion of a cosmological constant at the $2\sigma$ level), and it is also falsifiable in the near future owing to some peculiar features which differentiate this model from the standard one. Finally, we have applied some priors which come from cosmology and, in particular, from information theory and gravitational thermodynamics.
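For readers unfamiliar with the Bayesian Evidence used above, a minimal one-parameter toy sketch (not the paper's computation; the likelihood and prior range are invented) shows how the evidence ratio automatically penalizes a free parameter that the data only mildly prefer:

```python
import numpy as np

def log_evidence(loglike, lo, hi, n=4001):
    """ln Z = ln ∫ L(θ) π(θ) dθ for a flat prior on [lo, hi] (grid sum)."""
    theta = np.linspace(lo, hi, n)
    ll = np.array([loglike(t) for t in theta])
    lmax = ll.max()                       # factor out the peak for stability
    z = np.exp(ll - lmax).sum() * (theta[1] - theta[0]) / (hi - lo)
    return lmax + np.log(z)

# invented likelihood: data mildly prefer theta = 0.3 with uncertainty 0.2
def loglike(theta):
    return -0.5 * ((theta - 0.3) / 0.2) ** 2

lnZ_free = log_evidence(loglike, -2.0, 2.0)  # free parameter pays an Occam penalty
lnZ_fixed = loglike(0.0)                     # model with theta pinned at 0
ln_bayes_factor = lnZ_free - lnZ_fixed       # negative here: the fixed model wins
```

On the usual Jeffreys scale, |ln BF| < 1 is inconclusive; the point is only that the evidence trades goodness of fit against prior volume, which is what makes it a reliable tool for comparing VSL ansätze against the constant-$c$ baseline.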
Nonparametric statistical tests are useful procedures that can be applied in a wide range of situations, such as testing randomness or goodness of fit, one-sample, two-sample and multiple-sample analysis, association between bivariate samples, or count data analysis. Their use is often preferred to parametric tests because they require less restrictive assumptions about the sampled population. In this work, JavaNPST, an open-source Java library implementing 40 nonparametric statistical tests, is presented. It can be helpful for programmers and practitioners interested in performing nonparametric statistical analyses, providing a quick and easy way of running these tests directly within any Java code. Some examples of use are also shown, highlighting some of the more remarkable capabilities of the library.
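JavaNPST itself is a Java library; as a language-neutral illustration of the test families it covers (two-sample and multiple-sample analysis), here is the equivalent computation with SciPy. The sample data are invented.

```python
from scipy import stats

# invented samples with a clear location shift between groups
sample_a = [1.2, 2.1, 1.9, 2.5, 1.7, 2.3, 1.5, 2.0]
sample_b = [2.8, 3.4, 3.1, 2.9, 3.6, 3.0, 3.3, 2.7]
sample_c = [4.1, 4.5, 4.2, 4.8]

# two-sample location test (Mann-Whitney / Wilcoxon rank-sum)
u_stat, p_two = stats.mannwhitneyu(sample_a, sample_b, alternative="two-sided")

# multiple-sample analysis (Kruskal-Wallis one-way test on ranks)
h_stat, p_multi = stats.kruskal(sample_a, sample_b, sample_c)
```

Because the groups barely overlap, both tests reject the null of equal locations at conventional levels without any normality assumption, which is exactly the situation where nonparametric procedures are preferred.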
In this work, we investigate Newtonian cosmologies with a time-varying gravitational constant, $G(t)$. We examine whether such models can reproduce the low-redshift cosmological observations without a cosmological constant or any other sort of explicit dark energy fluid. Starting from a modified Newton's second law, where $G$ is taken as a function of time, we derive the first Friedmann-Lemaître equation, where a second parameter, $G^*$, appears as the gravitational constant. This parameter is related to the original $G$ from the second law, which remains in the acceleration equation. We use this approach to reproduce various cosmological scenarios studied in the literature, and we test these models with low-redshift probes: type-Ia supernovae (SNIa), baryon acoustic oscillations, and cosmic chronometers, taking into account a possible change in the supernova intrinsic luminosity with redshift. As a result, we obtain several models with $\chi^2$ values similar to those of the standard $\Lambda$CDM cosmology. When we allow for a redshift dependence of the SNIa intrinsic luminosity, a model with $G$ decreasing exponentially to zero while remaining positive (model 4) can explain the observations without acceleration. When we assume no redshift dependence of the SNIa luminosity, the observations favour a negative $G$ at large scales, while $G^*$ remains positive for most of these models. We conclude that these models offer interesting interpretations of the low-redshift cosmological observations without the need for a dark energy term.
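The low-redshift fitting machinery mentioned (distance moduli and $\chi^2$ against SNIa-like data) can be sketched as below for the baseline flat $\Lambda$CDM case; the paper's $G(t)$ models would enter by modifying the expansion rate $H(z)$. Parameter values and function names here are illustrative assumptions, not the paper's.

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458  # speed of light in km/s

def H(z, H0=70.0, Om=0.3):
    """Flat LambdaCDM expansion rate in km/s/Mpc (illustrative baseline;
    a G(t) model would replace this function)."""
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def mu(z, **kw):
    """Distance modulus for a flat universe."""
    dc, _ = quad(lambda zp: C_KMS / H(zp, **kw), 0.0, z)  # comoving distance, Mpc
    dl = (1.0 + z) * dc                                   # luminosity distance
    return 5.0 * np.log10(dl) + 25.0

def chi2(z_obs, mu_obs, sigma, **kw):
    """Gaussian chi-square of SNIa-like distance moduli against the model."""
    model = np.array([mu(zi, **kw) for zi in z_obs])
    return float(np.sum(((mu_obs - model) / sigma) ** 2))
```

Comparing models then amounts to minimizing `chi2` over each model's parameters and checking, as the paper does, whether the alternatives reach $\chi^2$ values comparable to $\Lambda$CDM.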