
CARPool: fast, accurate computation of large-scale structure statistics by pairing costly and cheap cosmological simulations

Posted by Nicolas Chartier
Publication date: 2020
Research field: Physics
Paper language: English





To exploit the power of next-generation large-scale structure surveys, ensembles of numerical simulations are necessary to give accurate theoretical predictions of the statistics of observables. High-fidelity simulations come at a towering computational cost. Therefore, approximate but fast simulations, surrogates, are widely used to gain speed at the price of introducing model error. We propose a general method that exploits the correlation between simulations and surrogates to compute fast, reduced-variance statistics of large-scale structure observables without model error at the cost of only a few simulations. We call this approach Convergence Acceleration by Regression and Pooling (CARPool). In numerical experiments with intentionally minimal tuning, we apply CARPool to a handful of GADGET-III $N$-body simulations paired with surrogates computed using COmoving Lagrangian Acceleration (COLA). We find $\sim 100$-fold variance reduction even in the non-linear regime, up to $k_\mathrm{max} \approx 1.2\,h\,\mathrm{Mpc}^{-1}$ for the matter power spectrum. CARPool realises similar improvements for the matter bispectrum. In the nearly linear regime CARPool attains far larger sample variance reductions. By comparing to the 15,000 simulations from the Quijote suite, we verify that the CARPool estimates are unbiased, as guaranteed by construction, even though the surrogate misses the simulation truth by up to $60\%$ at high $k$. Furthermore, even with a fully configuration-space statistic like the non-linear matter density probability density function, CARPool achieves unbiased variance reduction factors of up to $\sim 10$, without any further tuning. Conversely, CARPool can be used to remove model error from ensembles of fast surrogates by combining them with a few high-accuracy simulations.
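
At its core, CARPool is a control-variates construction: for paired simulation/surrogate statistics $x_i, y_i$ sharing initial conditions, and a surrogate mean $\mu_y$ known precisely from many cheap runs, the estimator $\bar{x} - \beta(\bar{y} - \mu_y)$ is unbiased for any $\beta$, with variance minimised by $\beta \approx \mathrm{Cov}(x,y)/\mathrm{Var}(y)$. The NumPy sketch below shows the simple per-bin (diagonal-$\beta$) version; the function name and array shapes are illustrative, not taken from the authors' code.

```python
import numpy as np

def carpool_mean(sim, surr_paired, surr_mean):
    """Control-variates (CARPool-style) estimate of the simulation mean.

    sim         : (n, p) statistic from n expensive simulations
    surr_paired : (n, p) surrogate statistic from the SAME initial conditions
    surr_mean   : (p,)   surrogate mean from many independent cheap runs

    Unbiased for any beta, since E[surr_paired - surr_mean] = 0; the per-bin
    beta below is the variance-minimising choice for a diagonal control matrix.
    """
    sim, surr_paired = np.atleast_2d(sim), np.atleast_2d(surr_paired)
    ds = sim - sim.mean(axis=0)
    du = surr_paired - surr_paired.mean(axis=0)
    beta = (ds * du).mean(axis=0) / du.var(axis=0)
    return sim.mean(axis=0) - beta * (surr_paired.mean(axis=0) - surr_mean)

# e.g. p_hat = carpool_mean(pk_gadget, pk_cola_paired, pk_cola_mean)  # hypothetical names
```

With a highly correlated surrogate such as COLA, the bracketed correction cancels most of the sample noise in $\bar{x}$, which is the source of the reported $\sim 100$-fold variance reductions.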




Read also

The covariance matrix $\boldsymbol{\Sigma}$ of non-linear clustering statistics that are measured in current and upcoming surveys is of fundamental interest for comparing cosmological theory and data and a crucial ingredient for the likelihood approximations underlying widely used parameter inference and forecasting methods. The extreme number of simulations needed to estimate $\boldsymbol{\Sigma}$ to sufficient accuracy poses a severe challenge. Approximating $\boldsymbol{\Sigma}$ using inexpensive but biased surrogates introduces model error with respect to full simulations, especially in the non-linear regime of structure growth. To address this problem we develop a matrix generalization of Convergence Acceleration by Regression and Pooling (CARPool) to combine a small number of simulations with fast surrogates and obtain low-noise estimates of $\boldsymbol{\Sigma}$ that are unbiased by construction. Our numerical examples use CARPool to combine GADGET-III $N$-body simulations with fast surrogates computed using COmoving Lagrangian Acceleration (COLA). Even at the challenging redshift $z=0.5$, we find variance reductions of at least $\mathcal{O}(10^1)$ and up to $\mathcal{O}(10^4)$ for the elements of the matter power spectrum covariance matrix on scales $8.9\times 10^{-3} < k_\mathrm{max} < 1.0\,h\,\mathrm{Mpc}^{-1}$. We demonstrate comparable performance for the covariance of the matter bispectrum, the matter correlation function and the probability density function of the matter density field. We compare eigenvalues, likelihoods, and Fisher matrices computed using the CARPool covariance estimate with the standard sample covariance estimators and generally find considerable improvement, except in cases where $\boldsymbol{\Sigma}$ is severely ill-conditioned.
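
The same cancellation idea extends to covariances: subtract the paired surrogate sample covariance and add back the surrogate covariance known precisely from many cheap runs. Below is a toy scalar-$\beta$ sketch only; the paper's actual estimator is a matrix generalization and requires additional care (for instance, the raw difference here is not guaranteed to be positive semi-definite).

```python
import numpy as np

def carpool_covariance(sim, surr_paired, surr_cov, beta=1.0):
    """Toy CARPool-style covariance estimate with a single scalar beta.

    sim, surr_paired : (n, p) paired simulation / surrogate samples
    surr_cov         : (p, p) surrogate covariance from many cheap runs

    E[np.cov(surr_paired)] = surr_cov, so the estimator is unbiased for
    any beta; the result is, however, not guaranteed to be positive
    semi-definite."""
    S_sim = np.cov(sim, rowvar=False)
    S_sur = np.cov(surr_paired, rowvar=False)
    return S_sim - beta * (S_sur - surr_cov)
```
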
Constraining neutrino mass remains an elusive challenge in modern physics. Precision measurements are expected from several upcoming cosmological probes of large-scale structure. Achieving this goal relies on an equal level of precision from theoretical predictions of neutrino clustering. Numerical simulations of the non-linear evolution of cold dark matter and neutrinos play a pivotal role in this process. We incorporate neutrinos into the cosmological N-body code CUBEP3M and discuss the challenges associated with pushing to the extreme scales demanded by the neutrino problem. We highlight code optimizations made to exploit modern high performance computing architectures and present a novel method of data compression that reduces the phase-space particle footprint from 24 bytes in single precision to roughly 9 bytes. We scale the neutrino problem to the Tianhe-2 supercomputer and provide details of our production run, named TianNu, which uses 86% of the machine (13,824 compute nodes). With a total of 2.97 trillion particles, TianNu is currently the world's largest cosmological N-body simulation and improves upon previous neutrino simulations by two orders of magnitude in scale. We finish with a discussion of the unanticipated computational challenges that were encountered during the TianNu runtime.
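
As one illustration of how the per-particle footprint can drop from 24 bytes (six single-precision floats) to roughly 9, consider storing positions as 1-byte offsets within the particle's grid cell, with the cell index left implicit in a cell-ordered layout, and velocities as 2-byte quantized integers. This sketch is only in the spirit of the paper's scheme; the actual TianNu layout and velocity mapping differ in detail.

```python
import numpy as np

CELL = 1.0  # hypothetical grid-cell size, in box units

def compress(pos, vel, vmax):
    """Pack (n, 3) positions/velocities into 9 bytes per particle:
    3 x 1-byte position offsets inside the particle's cell plus
    3 x 2-byte quantized velocities. The cell index itself is not
    stored; in a cell-ordered particle layout it is implicit."""
    off = np.minimum((pos % CELL) / CELL * 256.0, 255.0).astype(np.uint8)
    vq = np.round(np.clip(vel / vmax, -1.0, 1.0) * 32767.0).astype(np.int16)
    return off, vq

def decompress(cell_index, off, vq, vmax):
    """Recover approximate phase-space coordinates given the implicit cell."""
    pos = (cell_index + (off.astype(np.float64) + 0.5) / 256.0) * CELL
    vel = vq.astype(np.float64) / 32767.0 * vmax
    return pos, vel
```
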
The standard model of cosmology, $\Lambda$CDM, is the simplest model that matches the current observations, but it relies on two hypothetical components, to wit, dark matter and dark energy. Future galaxy surveys and cosmic microwave background (CMB) experiments will independently shed light on these components, but a joint analysis that includes cross-correlations will be necessary to extract as much information as possible from the observations. In this paper, we carry out a multi-probe analysis based on pseudo-spectra and test it on publicly available data sets. We use CMB temperature anisotropies and CMB lensing observations from Planck as well as the spectroscopic galaxy and quasar samples of SDSS-III/BOSS, taking advantage of the large areas covered by these surveys. We build a likelihood to simultaneously analyse the auto and cross spectra of CMB lensing and tracer overdensity maps before running Monte-Carlo Markov Chains (MCMC) to assess the constraining power of the combined analysis. We then add the CMB temperature anisotropies likelihood and obtain constraints on cosmological parameters ($H_0$, $\omega_b$, $\omega_c$, $\ln 10^{10} A_s$, $n_s$ and $z_{re}$) and galaxy biases. We demonstrate that the joint analysis can additionally constrain the total mass of neutrinos $\Sigma m_\nu$ as well as the dark energy equation of state $w$ at once (for a total of eight cosmological parameters), which is impossible with either of the data sets considered separately. Finally, we discuss limitations of the analysis related to, e.g., the theoretical precision of the models, particularly in the non-linear regime.
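
The computational core of such a joint analysis is a Gaussian likelihood over the stacked auto- and cross-band-powers with their joint covariance; a minimal sketch follows (the stacking, covariance, and function name here are assumptions, not the paper's code).

```python
import numpy as np

def joint_cl_loglike(data_vec, model_vec, cov):
    """Gaussian log-likelihood over a stacked band-power vector, e.g. the
    concatenated CMB-lensing auto, lensing-galaxy cross, and galaxy auto
    spectra, with their joint covariance; the core of an MCMC over
    cosmological and bias parameters."""
    r = data_vec - model_vec
    return -0.5 * r @ np.linalg.solve(cov, r)
```
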
We show how the non-linearity of general relativity generates a characteristic non-Gaussian signal in cosmological large-scale structure that we calculate at all perturbative orders in a large scale limit. Newtonian gravity and general relativity provide complementary theoretical frameworks for modelling large-scale structure in $\Lambda$CDM cosmology; a relativistic approach is essential to determine initial conditions which can then be used in Newtonian simulations studying the non-linear evolution of the matter density. Most inflationary models in the very early universe predict an almost Gaussian distribution for the primordial metric perturbation, $\zeta$. However, we argue that it is the Ricci curvature of comoving-orthogonal spatial hypersurfaces, $R$, that drives structure formation at large scales. We show how the non-linear relation between the spatial curvature, $R$, and the metric perturbation, $\zeta$, translates into a specific non-Gaussian contribution to the initial comoving matter density that we calculate for the simple case of an initially Gaussian $\zeta$. Our analysis shows the non-linear signature of Einstein's gravity in large-scale structure.
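
For a spatial metric of the form $g_{ij} = a^2 e^{2\zeta}\delta_{ij}$, the standard conformal-transformation identity for the 3-Ricci scalar makes the non-linearity explicit (a reconstruction of the relation the abstract alludes to, not a quotation from the paper):

```latex
% spatial metric assumed: g_{ij} = a^2 e^{2\zeta} \delta_{ij}
R \;=\; -\frac{2}{a^{2}}\, e^{-2\zeta}\left[\, 2\nabla^{2}\zeta + \left(\nabla\zeta\right)^{2} \right]
  \;\simeq\; -\frac{4}{a^{2}}\,\nabla^{2}\zeta \quad \text{(first order)}
```

So even for a Gaussian $\zeta$, the exponential prefactor and the $(\nabla\zeta)^2$ term make $R$, and hence the initial comoving matter density it sources, non-Gaussian.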
We present the 2-point function from Fast and Accurate Spherical Bessel Transformation (2-FAST) algorithm for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum $P(k)$ onto the configuration space, $\xi_\ell^\nu(r)$, or spherical harmonic space, $C_\ell(\chi,\chi')$. First, we employ the FFTlog transformation of the power spectrum to divide the calculation into $P(k)$-dependent coefficients and $P(k)$-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
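
The $P(k)$-dependent half of this split is just an FFT in $\ln k$: sampling $P(k)$ log-uniformly and de-trending by a power law $k^\nu$ yields coefficients of complex power laws $k^{\nu + i\eta_m}$, whose integrals against $j_\ell(kr)$ are the analytic, $P(k)$-independent pieces. A sketch of that first step (the function name and the choice $\nu=-1.5$ are illustrative):

```python
import numpy as np

def fftlog_powerlaw_coeffs(pk, k, nu=-1.5):
    """Expand P(k), sampled on a log-uniform k grid, as complex power laws:

        P(k) ~ k**nu * sum_m c[m] * (k / k[0])**(1j * eta[m])

    Integrals of P(k) against spherical Bessel functions then reduce to
    these P(k)-dependent coefficients times analytic basis integrals of
    k**(nu + 1j*eta) * j_ell(k*r), which 2-FAST evaluates by recursion.
    The endpoint smoothing used in practice is omitted here."""
    n = len(k)
    dlnk = np.log(k[1] / k[0])                      # log-uniform spacing
    c = np.fft.fft(pk * k**(-nu)) / n               # Fourier series in ln k
    eta = 2.0 * np.pi * np.fft.fftfreq(n, d=dlnk)   # power-law frequencies
    return c, eta

# sanity check: the expansion reproduces P(k) on the grid (up to roundoff):
# pk_rec = (k**nu * (c * (k[:, None] / k[0]) ** (1j * eta)).sum(axis=1)).real
```
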