This work presents a method for computing Voigt functions and their derivatives, to high accuracy, on a uniform grid. It is based on an adaptation of Fourier-transform-based convolution. The relative error of the result decreases as the fourth power of the computational effort. Because its core consists of highly vectorizable operations, it can be implemented very efficiently in scripting-language environments that provide fast vector libraries. The availability of the derivatives makes it suitable as a function generator for non-linear fitting procedures.
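The core idea of computing a Voigt profile as a convolution evaluated with FFTs can be sketched as follows. This is a minimal illustration, not the paper's method: it forms a Voigt profile on a uniform grid by circularly convolving sampled Gaussian and Lorentzian kernels (the grid, `sigma`, and `gamma` are illustrative assumptions).

```python
import numpy as np

def voigt_fft(x, sigma, gamma):
    """Approximate Voigt profile on the uniform grid x by FFT-based
    convolution of a Gaussian (std sigma) with a Lorentzian (HWHM gamma)."""
    dx = x[1] - x[0]
    xc = x - x[len(x) // 2]  # recenter the grid so the kernels peak at zero
    gauss = np.exp(-xc**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
    lorentz = gamma / (np.pi * (xc**2 + gamma**2))
    # Circular convolution via FFT; ifftshift puts the kernel peaks at
    # index 0, fftshift moves the convolved peak back to the grid center.
    conv = np.fft.ifft(np.fft.fft(np.fft.ifftshift(gauss)) *
                       np.fft.fft(np.fft.ifftshift(lorentz))).real
    return np.fft.fftshift(conv) * dx
```

Every step is a whole-array operation, which is why this style of computation vectorizes well in environments such as NumPy. A production implementation would also need to control the wrap-around error of the circular convolution and the truncation of the Lorentzian tails.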
The bottleneck of micromagnetic simulations is the computation of the long-range magnetostatic fields. On regular N-node grids this can be tackled with Fast Fourier Transforms in O(N log N) time, whereas the geometrically more versatile finite-element methods (FEM) are limited to O(N^{4/3}) at best. We report the implementation of a non-uniform Fast Fourier Transform algorithm which brings O(N log N) scaling to FEM, with no loss of accuracy in the results.
In this paper we derive an updating scheme for calculating important network statistics such as the degree and clustering coefficient, aiming to reduce the amount of computation needed to track the evolving behavior of large networks and, more importantly, to provide efficient methods for modeling the evolution of networks. With the updating scheme, the network statistics can be maintained incrementally, much faster than recomputing them from scratch at each step for large evolving networks. The update formulas can also be used to determine which edge or node will lead to the extremal change in a network statistic, providing a way of predicting or designing the evolution rules of networks.
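The flavor of such an updating scheme can be shown with the simplest statistic, the degree. The sketch below (a hypothetical `EvolvingGraph` class, not the paper's formulation) maintains the degree sequence and mean degree under edge insertion with O(1) work per edge, rather than an O(N + E) recomputation.

```python
class EvolvingGraph:
    """Minimal sketch: incremental maintenance of degrees and mean degree.
    More involved statistics (e.g. clustering coefficients) admit similar
    updates but require neighbor-set intersections per inserted edge."""

    def __init__(self, n):
        self.n = n
        self.deg = [0] * n
        self.mean_deg = 0.0

    def add_edge(self, u, v):
        self.deg[u] += 1
        self.deg[v] += 1
        # Each new edge raises the mean degree by exactly 2/n,
        # so no pass over the whole graph is needed.
        self.mean_deg += 2.0 / self.n
```

Because the update formula gives the change caused by a single edge in closed form, one can also evaluate it for every candidate edge to find the one producing the extremal change, as described above.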
We propose a novel method for computing $p$-values based on nested sampling (NS) applied to the sampling space rather than the parameter space of the problem, in contrast to its usage in Bayesian computation. The computational cost of NS scales as $\log^2{1/p}$, which compares favorably to the $1/p$ scaling for Monte Carlo (MC) simulations. For significances greater than about $4\sigma$ in both a toy problem and a simplified resonance search, we show that NS requires orders of magnitude fewer simulations than ordinary MC estimates. This is particularly relevant for high-energy physics, which adopts a $5\sigma$ gold standard for discovery. We conclude with remarks on new connections between Bayesian and frequentist computation and possibilities for tuning NS implementations for still better performance in this setting.
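The scaling argument can be illustrated on a toy null distribution where the constrained sampling step is trivial. The sketch below (my own simplification, not the paper's implementation) estimates $p = P(X > t_{\rm obs})$ for $X \sim U(0,1)$: NS repeatedly discards the lowest of $n$ live points, shrinking the surviving "volume" by $e^{-1/n}$ per step, so only about $n \log(1/p)$ constrained draws are needed instead of the $\sim 1/p$ draws of plain MC.

```python
import math
import random

def ns_tail_prob(t_obs, n_live=500, seed=0):
    """Toy nested-sampling estimate of p = P(X > t_obs) for X ~ U(0, 1).
    The constrained draw (uniform above the current threshold) is trivial
    here; realistic problems need MCMC for this step, which is where the
    extra factor in the log^2(1/p) cost comes from."""
    rng = random.Random(seed)
    live = sorted(rng.random() for _ in range(n_live))
    log_x = 0.0  # log of the surviving volume P(X > current threshold)
    while live[0] < t_obs:
        thresh = live.pop(0)
        log_x -= 1.0 / n_live  # each step shrinks the volume by ~e^{-1/n}
        # Replacement draw conditioned on exceeding the discarded threshold.
        live.append(thresh + (1.0 - thresh) * rng.random())
        live.sort()
    return math.exp(log_x)
```

For $p = 10^{-2}$ this uses roughly $500 \times \ln(100) \approx 2300$ constrained draws; a plain MC estimate of the same tail to comparable relative precision would need on the order of $10^4$ or more simulations, and the gap widens rapidly at $5\sigma$-level tail probabilities.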
In this work we verify the sufficiency of Jensen's necessary and sufficient condition for a class of genus 0 or 1 entire functions to have only real zeros. These functions are Fourier transforms of even, positive, infinitely differentiable, and very rapidly decaying functions. We also apply our result to several important special functions in mathematics: for the modified Bessel function $K_{iz}(a)$, $a>0$, as a function of the variable $z$, the Riemann Xi function $\Xi(z)$, and the character Xi function $\Xi(z;\chi)$ when $\chi$ is a real primitive non-principal character satisfying $\varphi(u;\chi)\ge 0$ on the real line, we prove that these entire functions have only real zeros.
We present a new method, based on Gaussian process regression, for reconstructing the continuous $x$-dependence of parton distribution functions (PDFs) from quasi-PDFs computed using lattice QCD. We examine the origin of the unphysical oscillations seen in current lattice calculations of quasi-PDFs and develop a nonparametric fitting approach to take the required Fourier transform. The method is tested on one ensemble of maximally twisted mass fermions with two light quarks. We find that with our approach oscillations of the quasi-PDF are drastically reduced. However, the final effect on the light-cone PDFs is small. This finding suggests that the deviation seen between current lattice QCD results and phenomenological determinations cannot be attributed solely to the Fourier transform.
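The nonparametric ingredient, Gaussian process regression, can be sketched in a few lines. This is a generic illustration with an RBF kernel and made-up hyperparameters, not the paper's setup: the GP posterior mean reconstructs a smooth continuous function from noisy samples, which can then be transformed (here, Fourier-transformed) without the ringing induced by a truncated discrete sum.

```python
import numpy as np

def gp_posterior_mean(x_train, y_train, x_test, length=0.5, amp=1.0,
                      noise=0.05):
    """Posterior mean of a GP with an RBF kernel.
    Hyperparameters (length scale, amplitude, noise) are illustrative."""
    def k(a, b):
        return amp**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2
                               / length**2)
    # Condition on the training data: m(x*) = K(x*, X) [K(X, X) + s^2 I]^-1 y
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)
```

Because the posterior mean is an analytic function of $x$, it can be evaluated on an arbitrarily fine grid (or integrated against $e^{ixz}$) rather than relying on the finite set of lattice data points.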