We present the two-point function from the Fast and Accurate Spherical Bessel Transformation (2-FAST) algorithm for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum $P(k)$ onto configuration space, $\xi_\ell^\nu(r)$, or spherical harmonic space, $C_\ell(\chi,\chi')$. First, we employ the FFTlog transformation of the power spectrum to divide the calculation into $P(k)$-dependent coefficients and $P(k)$-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
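The first step, the FFTlog transformation, amounts to expanding $P(k)$ on a logarithmic $k$ grid into power-law basis functions $k^{\nu+i\eta_m}$, so that the integral of each basis function against the spherical Bessel functions can be tabulated once, independently of $P(k)$. The minimal NumPy sketch below (with an assumed toy power spectrum, grid, and bias $\nu$; it is not the 2-FAST code and omits the special-function recursions) illustrates this decomposition and checks that the power-law expansion reproduces $P(k)$:

    import numpy as np

    N = 256                              # number of log-spaced samples (assumed)
    k = np.logspace(-4, 2, N)            # k grid in h/Mpc (assumed range)
    dlnk = np.log(k[1] / k[0])
    nu = -0.3                            # power-law bias, a tunable assumption

    P = k / (1.0 + (k / 0.02)**3)        # toy stand-in for the galaxy power spectrum

    # FFTlog coefficients: discrete Fourier transform of the biased P(k) in ln k
    c_m = np.fft.fft(P * k**(-nu)) / N
    eta_m = 2.0 * np.pi * np.fft.fftfreq(N, d=dlnk)   # frequencies conjugate to ln k

    # Reconstruct P(k) = k^nu * sum_m c_m (k/k_0)^(i eta_m) from the expansion.
    # In the full algorithm these P(k)-dependent coefficients c_m would instead
    # multiply precomputed integrals of each basis function against j_ell.
    ratio = (k / k[0])[None, :]
    P_rec = k**nu * np.real(np.sum(c_m[:, None] * ratio**(1j * eta_m[:, None]), axis=0))
    print("max relative reconstruction error:", np.max(np.abs(P_rec / P - 1.0)))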
The linear canonical transform (LCT) was extended to complex-valued parameters, called the complex LCT, to describe complex amplitude propagation through lossy or lossless optical systems. The Bargmann transform is a special case of the complex LCT. In this paper, we normalize the Bargmann transform such that it can be bounded near infinity. We derive the relationships of the normalized Bargmann transform to the Gabor transform, Hermite-Gaussian functions, the gyrator transform, and the 2D nonseparable LCT. Several fast and accurate computational methods for the normalized Bargmann transform and its inverse are proposed based on these relationships.
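For reference, one standard convention for the (unnormalized) Bargmann transform of $f \in L^2(\mathbb{R})$ is

$$(\mathcal{B}f)(z) = \pi^{-1/4} \int_{-\infty}^{\infty} f(x)\, \exp\!\left(-\tfrac{1}{2}z^{2} + \sqrt{2}\,zx - \tfrac{1}{2}x^{2}\right)\, dx, \qquad z \in \mathbb{C},$$

which satisfies the reproducing-kernel bound $|(\mathcal{B}f)(z)| \le \|f\|_{2}\, e^{|z|^{2}/2}$, so the Gaussian-weighted function $e^{-|z|^{2}/2}(\mathcal{B}f)(z)$ is bounded. The specific normalization adopted in the paper may differ in detail from this weighting.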
The statistical distribution of galaxies is a powerful probe for constraining cosmological models and gravity. In particular, the matter power spectrum $P(k)$ combines information about the cosmological distance evolution and galaxy clustering. However, building $P(k)$ from galaxy catalogues requires a cosmological model to convert angles on the sky and redshifts into distances, which creates difficulties when comparing data with $P(k)$ predicted for other cosmological models, and for photometric surveys like LSST. The angular power spectrum $C_\ell(z_1,z_2)$ between two bins located at redshifts $z_1$ and $z_2$ contains the same information as the matter power spectrum and is free from any cosmological assumption, but predicting $C_\ell(z_1,z_2)$ from $P(k)$ is a costly computation when performed exactly. The Angpow software aims at computing quickly and accurately the auto ($z_1=z_2$) and cross ($z_1\neq z_2$) angular power spectra between redshift bins. We describe the developed algorithm, based on expansions in the Chebyshev polynomial basis and on the Clenshaw-Curtis quadrature method. We validate the results against other codes and benchmark the performance. Angpow is flexible and can handle any user-defined power spectra, transfer functions, and redshift selection windows. The code is fast enough to be embedded inside programs exploring large cosmological parameter spaces through comparison of $C_\ell(z_1,z_2)$ with data. We emphasize that the Limber approximation, often used to speed up the computation, gives incorrect $C_\ell$ values for cross-correlations.
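As a self-contained illustration of the quadrature rule named above (not the Angpow implementation itself), the following NumPy sketch builds the $(n+1)$-point Clenshaw-Curtis nodes and weights on $[-1,1]$ from the standard closed form and tests them on a smooth integrand:

    import numpy as np

    def clenshaw_curtis(n):
        """Nodes and weights of the (n+1)-point Clenshaw-Curtis rule on [-1, 1]."""
        theta = np.pi * np.arange(n + 1) / n
        x = np.cos(theta)                     # Chebyshev points
        w = np.zeros(n + 1)
        v = np.ones(n - 1)
        if n % 2 == 0:
            w[0] = w[n] = 1.0 / (n**2 - 1)
            for m in range(1, n // 2):
                v -= 2.0 * np.cos(2 * m * theta[1:n]) / (4 * m**2 - 1)
            v -= np.cos(n * theta[1:n]) / (n**2 - 1)
        else:
            w[0] = w[n] = 1.0 / n**2
            for m in range(1, (n - 1) // 2 + 1):
                v -= 2.0 * np.cos(2 * m * theta[1:n]) / (4 * m**2 - 1)
        w[1:n] = 2.0 * v / n
        return x, w

    # Smooth test integrand: int_{-1}^{1} exp(x) dx = e - 1/e
    x, w = clenshaw_curtis(32)
    print(abs(w @ np.exp(x) - (np.e - 1.0 / np.e)))   # close to machine precision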
We combine Newton's variational method with ideas from eigenvector continuation to construct a fast and accurate emulator for two-body scattering observables. The emulator will facilitate the application of rigorous statistical methods for interactions that depend smoothly on a set of free parameters. Our approach begins with a trial $K$ or $T$ matrix constructed from a small number of exact solutions to the Lippmann--Schwinger equation. Subsequent emulation only requires operations on small matrices. We provide several applications to short-range potentials with and without the Coulomb interaction and partial-wave coupling. It is shown that the emulator can accurately extrapolate far from the support of the training data. When used to emulate the neutron-proton cross section with a modern chiral interaction as a function of 26 free parameters, it reproduces the exact calculation with negligible error and provides an over 300-fold improvement in CPU time.
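The snapshot idea behind such emulators can be illustrated with a generic Galerkin (reduced-basis) projection of a parameter-dependent linear system: solve exactly at a few training parameters, then solve only a small projected system elsewhere. The sketch below is deliberately schematic, with random stand-in matrices, and is not the authors' Newton-variational $K$-matrix emulator:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    A0 = n * np.eye(n) + rng.standard_normal((n, n))   # well-conditioned base operator
    A1 = rng.standard_normal((n, n))                   # parameter-dependent piece
    b = rng.standard_normal(n)

    def solve_exact(theta):
        """Expensive 'exact' solve of the full n x n system A(theta) x = b."""
        return np.linalg.solve(A0 + theta * A1, b)

    # Training: a handful of exact solutions, orthonormalized into a snapshot basis
    train_thetas = [0.1, 0.5, 1.0, 1.5, 2.0]
    Q, _ = np.linalg.qr(np.column_stack([solve_exact(t) for t in train_thetas]))

    def solve_emulated(theta):
        """Cheap emulation: solve only the 5 x 5 system projected onto the snapshots."""
        A = A0 + theta * A1
        c = np.linalg.solve(Q.T @ (A @ Q), Q.T @ b)
        return Q @ c

    theta_test = 3.0                                   # outside the training range
    exact = solve_exact(theta_test)
    print(np.linalg.norm(solve_emulated(theta_test) - exact) / np.linalg.norm(exact))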
To exploit the power of next-generation large-scale structure surveys, ensembles of numerical simulations are necessary to give accurate theoretical predictions of the statistics of observables. High-fidelity simulations come at a towering computational cost. Therefore, approximate but fast simulations, surrogates, are widely used to gain speed at the price of introducing model error. We propose a general method that exploits the correlation between simulations and surrogates to compute fast, reduced-variance statistics of large-scale structure observables without model error at the cost of only a few simulations. We call this approach Convergence Acceleration by Regression and Pooling (CARPool). In numerical experiments with intentionally minimal tuning, we apply CARPool to a handful of GADGET-III $N$-body simulations paired with surrogates computed using COmoving Lagrangian Acceleration (COLA). We find $\sim 100$-fold variance reduction even in the non-linear regime, up to $k_\mathrm{max} \approx 1.2\,h\,\mathrm{Mpc}^{-1}$ for the matter power spectrum. CARPool realises similar improvements for the matter bispectrum. In the nearly linear regime CARPool attains far larger sample variance reductions. By comparing to the 15,000 simulations from the Quijote suite, we verify that the CARPool estimates are unbiased, as guaranteed by construction, even though the surrogate misses the simulation truth by up to $60\%$ at high $k$. Furthermore, even with a fully configuration-space statistic like the non-linear matter density probability density function, CARPool achieves unbiased variance reduction factors of up to $\sim 10$, without any further tuning. Conversely, CARPool can be used to remove model error from ensembles of fast surrogates by combining them with a few high-accuracy simulations.
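The variance-reduction mechanism is that of control variates: an unbiased but expensive estimate is combined with a cheap, correlated surrogate whose mean is known precisely from many fast runs. The toy example below (with hypothetical one-dimensional "simulation" and "surrogate" samples, not the CARPool code) shows the estimator staying unbiased despite a biased surrogate:

    import numpy as np

    rng = np.random.default_rng(1)
    n_pairs, n_cheap = 5, 20000          # few expensive runs, many cheap surrogate runs

    def paired_samples(size):
        """Correlated (simulation, surrogate) draws; the surrogate is biased low by 0.6."""
        common = rng.standard_normal(size)
        y = 10.0 + common + 0.1 * rng.standard_normal(size)   # 'simulation', true mean 10
        c = 9.4 + common + 0.1 * rng.standard_normal(size)    # biased 'surrogate'
        return y, c

    mu_c = paired_samples(n_cheap)[1].mean()   # surrogate mean from the many cheap runs
    y, c = paired_samples(n_pairs)             # only a handful of paired runs

    beta = np.cov(y, c)[0, 1] / np.var(c, ddof=1)        # (noisy) control-variate coefficient
    plain = y.mean()                                     # standard estimate from 5 runs
    reduced = y.mean() - beta * (c.mean() - mu_c)        # reduced-variance, still unbiased
    print(f"plain: {plain:.3f}   control-variate: {reduced:.3f}   truth: 10.0")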
This work presents a method of computing Voigt functions and their derivatives, to high accuracy, on a uniform grid. It is based on an adaptation of Fourier-transform-based convolution. The relative error of the result decreases as the fourth power of the computational effort. Because it uses highly vectorizable operations at its core, it can be implemented very efficiently in scripting-language environments that provide fast vector libraries. The availability of the derivatives makes it suitable as a function generator for non-linear fitting procedures.
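The underlying idea, that the Voigt profile is the convolution of a Gaussian and a Lorentzian and is therefore the inverse Fourier transform of the product of their analytic transforms, can be sketched on a uniform grid as follows. This bare-bones NumPy example (with arbitrary $\sigma$, $\gamma$, and grid choices) illustrates only the Fourier-transform-based convolution, not the paper's fourth-order scheme or its derivative evaluation:

    import numpy as np
    from scipy.special import voigt_profile   # reference implementation for comparison

    sigma, gamma = 1.0, 0.5                   # Gaussian std and Lorentzian HWHM (assumed)
    N, L = 4096, 200.0                        # grid points and total x extent (assumed)
    dx = L / N
    x = (np.arange(N) - N // 2) * dx          # uniform, centred grid

    # Product of the analytic Fourier transforms of the Gaussian and the Lorentzian
    kf = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)
    ft = np.exp(-0.5 * (sigma * kf)**2 - gamma * np.abs(kf))

    # Inverse FFT gives the Voigt profile on the grid (fftshift recentres it on x)
    v = np.fft.fftshift(np.fft.ifft(ft).real) / dx

    print("max abs deviation from scipy:", np.max(np.abs(v - voigt_profile(x, sigma, gamma))))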