
Cluster counts. II. Tensions, massive neutrinos, and modified gravity

Added by Stéphane Ilić
Publication date: 2019
Field: Physics
Language: English





The $\Lambda$CDM concordance model is very successful at describing our Universe with high accuracy and few parameters. Despite its successes, a few tensions persist; most notably, the best-fit $\Lambda$CDM model, as derived from the Planck CMB data, largely overpredicts the abundance of SZ clusters when using their standard mass calibration. Whether this is a sign of an incorrect calibration or of the need for new physics remains a matter of debate. Here we examined two simple extensions of the standard model and their ability to relieve this tension: massive neutrinos and a simple modified gravity model via a non-standard growth index $\gamma$. We used both the Planck CMB and SZ cluster counts as datasets, with or without local X-ray clusters. In the case of massive neutrinos, the SZ calibration $(1-b)$ is constrained to $0.59^{+0.03}_{-0.04}$ (68%), more than $5\sigma$ away from its standard value $\sim 0.8$. We found little correlation between $\sum m_\nu$ and $(1-b)$, corroborating previous conclusions derived from X-ray clusters; massive neutrinos do not alleviate the cluster-CMB tension. With our simple $\gamma$ model, we found a large correlation between calibration and growth index, but contrary to local X-ray clusters, SZ clusters are able to break the degeneracy between the two thanks to their extended $z$ range. The calibration $(1-b)$ was then constrained to $0.60^{+0.05}_{-0.07}$, leading to an interesting constraint on $\gamma = 0.60 \pm 0.13$. When both massive neutrinos and modified gravity were allowed, the preferred values remained centred on standard $\Lambda$CDM values, but $(1-b)\sim 0.8$ was allowed (though only at the $2\sigma$ level) provided $\sum m_\nu \sim 0.34$ eV and $\gamma \sim 0.8$. We conclude that massive neutrinos do not relieve the cluster-CMB tension and that a calibration close to the standard value $0.8$ would call for new physics in the gravitational sector.
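
The modified gravity model considered here enters through the growth index $\gamma$, which parametrises the linear growth rate as $f(a) \simeq \Omega_m(a)^\gamma$, with $\gamma \simeq 0.55$ recovering General Relativity. The sketch below is not the authors' pipeline; it only illustrates, for assumed illustrative cosmological parameters, how a larger $\gamma$ suppresses late-time growth and hence the predicted cluster abundance at fixed CMB normalisation.

```python
# Minimal sketch of the growth-index parametrisation f(a) = Omega_m(a)^gamma.
# Parameter values are illustrative assumptions, not the paper's best-fit values.
import numpy as np
from scipy.integrate import quad

Omega_m0 = 0.31  # assumed present-day matter density (flat LCDM background)

def Omega_m(a):
    """Matter density parameter at scale factor a for a flat LCDM background."""
    return Omega_m0 / (Omega_m0 + (1.0 - Omega_m0) * a**3)

def growth_rate(a, gamma):
    """Linear growth rate f = dlnD/dlna under the gamma parametrisation."""
    return Omega_m(a) ** gamma

def growth_factor(a, gamma, a_init=1e-3):
    """Linear growth factor D(a), normalised to D = a deep in matter domination."""
    integral, _ = quad(lambda lna: growth_rate(np.exp(lna), gamma),
                       np.log(a_init), np.log(a))
    return a_init * np.exp(integral)

# A larger gamma lowers D(a=1) relative to GR (gamma ~ 0.55), which reduces the
# predicted cluster abundance for a fixed CMB-derived normalisation.
for gamma in (0.55, 0.60, 0.80):
    print(gamma, growth_factor(1.0, gamma) / growth_factor(1.0, 0.55))
```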



Related research

Cosmic voids are progressively emerging as a new viable cosmological probe. Their abundance and density profiles are sensitive to modifications of gravity, as well as to dark energy and neutrinos. The main goal of this work is to investigate the possibility of exploiting cosmic void statistics to disentangle the degeneracies resulting from a proper combination of $f(R)$ modified gravity and neutrino mass. We use N-body simulations to analyse the density profiles and size function of voids traced by both dark matter particles and haloes. We find clear evidence of the enhancement of gravity in $f(R)$ cosmologies in the void density profiles at $z=1$. However, these effects can be almost completely overridden by the presence of massive neutrinos because of their thermal free-streaming. Although the limited volume of the analysed simulations does not allow us to achieve a statistically significant abundance of voids larger than $40\,\mathrm{Mpc}/h$, we find that the void size function at high redshifts and for large voids is potentially an effective probe to disentangle these degenerate cosmological models, which is key in view of the upcoming wide-field redshift surveys.
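
As a rough illustration of the void size function used above, the following sketch bins the effective radii of voids into a differential number density per logarithmic radius bin. The void catalogue, box size, and mock radii are placeholders assumed for the example, not the simulations analysed in that work.

```python
# Minimal sketch of a void size function, assuming a hypothetical catalogue of
# void effective radii (Mpc/h) from a void finder run on a simulation box.
import numpy as np

def void_size_function(radii, box_size, r_bins):
    """Differential void size function dn/dlnR [(Mpc/h)^-3] from void radii."""
    volume = box_size ** 3
    ln_edges = np.log(r_bins)
    counts, _ = np.histogram(np.log(radii), bins=ln_edges)
    return counts / (volume * np.diff(ln_edges))

# Mock usage: comparing the abundance of large voids (R > 40 Mpc/h) between two
# models is the kind of statistic proposed above as a discriminant.
rng = np.random.default_rng(0)
mock_radii = rng.lognormal(mean=np.log(15.0), sigma=0.4, size=5000)
bins = np.logspace(np.log10(5.0), np.log10(60.0), 12)
print(void_size_function(mock_radii, box_size=500.0, r_bins=bins))
```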
Modified gravity and massive neutrino cosmologies are two of the most interesting scenarios that have been recently explored to account for possible observational deviations from the concordance $\Lambda$-cold dark matter ($\Lambda$CDM) model. In this context, we investigated the large-scale structure of the Universe by exploiting the DUSTGRAIN-pathfinder simulations, which implement, simultaneously, the effects of $f(R)$ gravity and massive neutrinos. To study the possibility of breaking the degeneracy between these two effects, we analysed the redshift-space distortions in the clustering of dark matter haloes at different redshifts. Specifically, we focused on the monopole and quadrupole of the two-point correlation function, both in real and redshift space. The deviations with respect to the $\Lambda$CDM model have been quantified in terms of the linear growth rate parameter. We found that redshift-space distortions provide a powerful probe to discriminate between $\Lambda$CDM and modified gravity models, especially at high redshifts ($z \gtrsim 1$), even in the presence of massive neutrinos.
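
On linear scales, the monopole and quadrupole mentioned above reduce to the standard Kaiser prefactors multiplying the real-space matter power spectrum, and their ratio depends only on $\beta = f/b$; this is why the multipoles constrain the linear growth rate. A minimal sketch, with purely illustrative values of the growth rate $f$ and halo bias $b$:

```python
# Minimal sketch of the linear (Kaiser) redshift-space multipole prefactors.
def kaiser_multipole_factors(f, b):
    """Return the monopole and quadrupole prefactors multiplying b^2 * P_matter(k)."""
    beta = f / b
    p0 = 1.0 + (2.0 / 3.0) * beta + (1.0 / 5.0) * beta**2
    p2 = (4.0 / 3.0) * beta + (4.0 / 7.0) * beta**2
    return p0, p2

# A higher growth rate (e.g. enhanced growth in f(R) gravity) boosts the quadrupole
# relative to the monopole, which is how RSD discriminate between the models above.
for f in (0.5, 0.7):
    p0, p2 = kaiser_multipole_factors(f, b=2.0)
    print(f, p2 / p0)
```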
We consistently include the effect of massive neutrinos in the thermal Sunyaev-Zeldovich (SZ) power spectrum and cluster counts analyses, highlighting subtle dependencies on the total neutrino mass and data combination. In particular, we find that using the transfer functions for Cold Dark Matter (CDM) + baryons in the computation of the halo mass function, instead of the transfer functions including neutrino perturbations, as prescribed in recent work, yields a $\approx 0.25\%$ downward shift of the $\sigma_8$ constraint from tSZ power spectrum data, with a fiducial neutrino mass $\Sigma m_\nu = 0.06$ eV. In $\Lambda$CDM, with an X-ray mass bias corresponding to the expected hydrostatic mass bias, i.e., $(1-b)\simeq 0.8$, our constraints from Planck SZ data are consistent with the latest results from SPT, DES-Y1 and KiDS+VIKING-450. In $\nu\Lambda$CDM, our joint analyses of Planck SZ with Planck 2015 primary CMB yield a small improvement on the total neutrino mass bound compared to the Planck 2015 primary CMB constraint, as well as $(1-b)=0.64\pm0.04$ (68% CL). For forecasts, we find that competitive neutrino mass measurements using cosmic variance limited SZ power spectrum require masking the heaviest clusters and probing the small-scale SZ power spectrum up to $\ell_\mathrm{max}\approx 10^4$. Although this is challenging, we find that the SZ power spectrum can realistically be used to tightly constrain intra-cluster medium properties: we forecast a 2% determination of the X-ray mass bias by combining CMB-S4 and our mock SZ power spectrum with $\ell_\mathrm{max}=10^3$.
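
A central quantity in such tSZ analyses is $\sigma_8$, the RMS linear density fluctuation in spheres of $8\,\mathrm{Mpc}/h$, obtained by integrating a linear power spectrum against a top-hat window; which power spectrum enters (CDM+baryons only, or including neutrino perturbations) is precisely the choice behind the $\approx 0.25\%$ shift quoted above. A minimal sketch, where the arrays k and pk stand in for the output of a Boltzmann code:

```python
# Minimal sketch of sigma(R): sigma^2(R) = (1/2pi^2) \int dk k^2 P(k) W^2(kR).
import numpy as np

def tophat_window(x):
    """Fourier transform of a spherical top-hat window."""
    return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

def sigma_R(k, pk, R=8.0):
    """RMS linear density fluctuation in spheres of radius R [Mpc/h]."""
    integrand = k**2 * pk * tophat_window(k * R) ** 2
    return np.sqrt(np.trapz(integrand, k) / (2.0 * np.pi**2))

# Schematic usage with two hypothetical input spectra:
#   sigma8_cb = sigma_R(k, pk_cdm_baryons)   # CDM + baryons transfer functions
#   sigma8_m  = sigma_R(k, pk_total_matter)  # including neutrino perturbations
```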
Carlo Giocoli (2018)
We present a novel suite of cosmological N-body simulations called the DUSTGRAIN-pathfinder, implementing simultaneously the effects of an extension to General Relativity in the form of $f(R)$ gravity and of a non-negligible fraction of massive neutrinos. We describe the generation of simulated weak lensing and cluster counts observables within a past light-cone extracted from these simulations. The simulations have been performed by means of a combination of the MG-GADGET code and a particle-based implementation of massive neutrinos, while the light-cones have been generated using the MapSim pipeline, allowing us to compute weak lensing maps through a ray-tracing algorithm for different values of the source plane redshift. The mock observables extracted from our simulations will be employed for a series of papers focussed on understanding and possibly breaking the well-known observational degeneracy between $f(R)$ gravity and massive neutrinos, i.e. the fact that some specific combinations of the characteristic parameters of these two phenomena (the $f_{R0}$ scalar amplitude and the total neutrino mass $\Sigma m_{\nu}$) may be indistinguishable from the standard $\Lambda$CDM cosmology through several standard observational probes. In particular, in the present work we show how a tomographic approach to weak lensing statistics could allow us, especially for the next generation of wide-field surveys, to disentangle some of the models that appear statistically indistinguishable through standard single-redshift weak lensing probes.
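
The tomographic weak lensing statistics mentioned above ultimately compare convergence power spectra for sources at different redshifts. The sketch below is not the MapSim ray-tracing pipeline; it only illustrates the standard Limber approximation for a single source plane, with the matter power spectrum P_m(k, z) and the distance-redshift relation z_of_chi treated as assumed external inputs (e.g. from a Boltzmann code).

```python
# Minimal sketch of the Limber convergence power spectrum for a single source plane:
# C_ell = int_0^{chi_s} dchi W(chi)^2 / chi^2 * P_m(k = ell/chi, z(chi)),
# with W(chi) = (3/2) Omega_m (H0/c)^2 (chi / a) (chi_s - chi) / chi_s.
import numpy as np

def convergence_cl(ell, chi_s, z_of_chi, P_m, Omega_m=0.31, n_steps=256):
    """Limber C_ell for convergence, sources at comoving distance chi_s [Mpc/h]."""
    H0_over_c = 1.0 / 2997.92  # H0/c in (Mpc/h)^-1 units
    chis = np.linspace(1e-2 * chi_s, chi_s, n_steps)
    integrand = []
    for chi in chis:
        z = z_of_chi(chi)
        a = 1.0 / (1.0 + z)
        kernel = 1.5 * Omega_m * H0_over_c**2 * chi / a * (chi_s - chi) / chi_s
        integrand.append(kernel**2 / chi**2 * P_m(ell / chi, z))
    return np.trapz(integrand, chis)

# Evaluating convergence_cl for several source distances chi_s gives the tomographic
# comparison that helps break the f(R) / massive-neutrino degeneracy discussed above.
```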
In a recent work, Baldi et al. highlighted the issue of cosmic degeneracies, consisting in the fact that the standard statistics of the large-scale structure might not be sufficient to conclusively test cosmological models beyond $\Lambda$CDM when multiple extensions of the standard scenario coexist in nature. In particular, it was shown that the characteristic features of an $f(R)$ modified gravity theory and of massive neutrinos with an appreciable total mass $\Sigma_{i} m_{\nu_{i}}$ are suppressed in most of the basic large-scale structure observables for a specific combination of the main parameters of the two non-standard models. In the present work, we explore the possibility that the mean specific size of the supercluster spines -- which was recently proposed as a non-standard statistic by Shim and Lee to probe gravity at large scales -- can help to break this cosmic degeneracy. By analysing the halo samples from N-body simulations featuring various combinations of $f(R)$ and $\Sigma_{i} m_{\nu_{i}}$, we find that -- at the present epoch -- the value of $\Sigma_{i} m_{\nu_{i}}$ required to maximally suppress the effects of $f(R)$ gravity on the specific sizes of the supercluster spines is different from that found for the other standard statistics. Furthermore, it is also shown that at higher redshifts ($z \ge 0.3$) the deviations of the mean specific sizes of the supercluster spines for all of the four considered combinations from their $\Lambda$CDM values are statistically significant.