The speed of many one-line transformation methods for the production of, for example, Lévy alpha-stable random numbers, which generalize Gaussian ones, and Mittag-Leffler random numbers, which generalize exponential ones, is very high and satisfactory for most purposes. However, for the class of decreasing probability densities, fast rejection implementations like the Ziggurat by Marsaglia and Tsang promise a significant speed-up if it is possible to complement them with a method that samples the tails of the infinite support. This requires the fast generation of random numbers greater than or smaller than a given value. We present a method to achieve this, and also to generate random numbers within any arbitrary interval. We demonstrate the method by showing the properties of the transformation maps of the above-mentioned distributions, taking stable and geometric stable random numbers used for the stochastic solution of the space-time fractional diffusion equation as examples.
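Stated in generic terms (the abstract's own transformation maps for stable and Mittag-Leffler laws are not reproduced here), sampling conditioned on an interval can be obtained by restricting the uniform variate to [F(a), F(b)] before applying the inverse transform. The sketch below illustrates this with the exponential law, whose inverse CDF has a closed form; the function and variable names are illustrative only.

```python
import numpy as np

def sample_in_interval(cdf, inv_cdf, a, b, size=1, rng=None):
    """Draw variates conditioned on the interval [a, b] by mapping uniforms
    restricted to [F(a), F(b)] through the inverse CDF."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(cdf(a), cdf(b), size=size)
    return inv_cdf(u)

# Example: the exponential tail beyond x = 5 (the Mittag-Leffler law reduces
# to the exponential for order beta = 1, where the inverse CDF is explicit).
lam = 1.0
cdf = lambda x: 1.0 - np.exp(-lam * np.asarray(x, dtype=float))
inv_cdf = lambda u: -np.log1p(-u) / lam
tail_samples = sample_in_interval(cdf, inv_cdf, 5.0, np.inf, size=10)
```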
We present a rejection method based on recursive covering of the probability density function with equal tiles. The concept works for any probability density function that is pointwise computable or representable by tabular data. By the implicit construction of piecewise-constant majorizing and minorizing functions that are arbitrarily close to the density function, the production of random variates can be made almost entirely independent of the cost of evaluating the density function and is extremely fast. The method works unattended for probability densities with discontinuities (jumps and poles). The setup time is short, essentially independent of the shape of the probability density, and linear in the table size. Recently formulated requirements for a general and automatic non-uniform random number generator are exceeded. We give benchmarks together with a similar rejection method and with a transformation method.
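A minimal sketch of rejection sampling under a piecewise-constant majorizing function on equal tiles, assuming the cell maxima are estimated from a few probe points rather than by the exact recursive construction described above; a matching minorizing function would additionally allow acceptance without evaluating the density at all. All names below are illustrative.

```python
import numpy as np

def setup_majorant(pdf, lo, hi, n_cells=256, probe=16):
    """Piecewise-constant upper bound of `pdf` on n_cells equal tiles,
    with each cell maximum estimated from `probe` points (a simplification)."""
    edges = np.linspace(lo, hi, n_cells + 1)
    tops = np.array([pdf(np.linspace(edges[i], edges[i + 1], probe)).max()
                     for i in range(n_cells)])
    return edges, tops

def sample(pdf, edges, tops, size, rng=None):
    """Standard accept/reject under the piecewise-constant majorant."""
    rng = np.random.default_rng() if rng is None else rng
    widths = np.diff(edges)
    weights = tops * widths
    weights /= weights.sum()
    out = []
    while len(out) < size:
        i = rng.choice(len(tops), p=weights)       # pick a tile by its area
        x = rng.uniform(edges[i], edges[i + 1])    # uniform inside the tile
        if rng.uniform(0.0, tops[i]) <= pdf(x):    # accept/reject test
            out.append(x)
    return np.array(out)

# Example: a half-normal density on [0, 6]
pdf = lambda x: np.sqrt(2 / np.pi) * np.exp(-0.5 * np.asarray(x, dtype=float) ** 2)
edges, tops = setup_majorant(pdf, 0.0, 6.0)
draws = sample(pdf, edges, tops, size=1000)
```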
We study how the presence of correlations in physical variables contributes to the form of probability distributions. We investigate a process with correlations in the variance generated by (i) a Gaussian or (ii) a truncated Lévy distribution. For both (i) and (ii), we find that due to the correlations in the variance, the process "dynamically" generates power-law tails in the distributions, whose exponents can be controlled through the way the correlations in the variance are introduced. For (ii), we find that the process can extend a truncated distribution beyond the truncation cutoff, which leads to a crossover between a Lévy stable power law and the present "dynamically generated" power law. We show that the process can explain the crossover behavior recently observed in the S&P 500 stock index.
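As a rough illustration only (the abstract does not specify how the variance correlations are introduced, so the AR(1) log-volatility below is an assumption), a process with correlated variance can be simulated by multiplying Gaussian innovations by a slowly varying volatility; the marginal distribution of the product develops tails much heavier than the Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.999          # rho sets the correlation time of the variance

# Assumed construction: log-volatility follows an AR(1) process, so the
# variance of the process is long-range correlated in time.
log_sigma = np.empty(n)
log_sigma[0] = 0.0
eps = rng.standard_normal(n)
for t in range(1, n):
    log_sigma[t] = rho * log_sigma[t - 1] + np.sqrt(1 - rho**2) * eps[t]

x = np.exp(log_sigma) * rng.standard_normal(n)   # case (i): Gaussian innovations
# The sample kurtosis of x greatly exceeds the Gaussian value of 3,
# signalling the fat tails generated by the correlated variance.
```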
The Humbert-Bessel functions are multi-index functions with various applications in electromagnetism. New families of functions sharing some similarities with Bessel functions are often introduced in the mathematical literature, but on closer analysis they are not new in the strict sense of the word, and can be expressed in terms of already discussed forms. This is indeed the case of the re-modified Bessel functions, whose properties have been analyzed within the context of coincidence problems in probability theory. In this paper we show that these functions are particular cases of the Humbert-Bessel ones.
We present the umbrella sampling (US) technique and show that it can be used to sample extremely low-probability areas of the posterior distribution that may be required in statistical analyses of data. In this approach, sampling of the target likelihood is split into sampling of multiple biased likelihoods confined within individual umbrella windows. We show that the US algorithm is efficient and highly parallel and that it can be easily used with other existing MCMC samplers. The method allows the user to capitalize on their intuition and define umbrella windows to increase sampling accuracy along specific directions in the parameter space. Alternatively, one can define umbrella windows using an approach similar to parallel tempering. We provide a public code that implements umbrella sampling as a standalone Python package. We present a number of tests illustrating the power of the US method in sampling low-probability areas of the posterior and show that this ability allows a considerably more robust sampling of multi-modal distributions compared to standard sampling methods. We also present an application of the method in a real-world example of deriving cosmological constraints using the supernova type Ia data. We show that umbrella sampling can sample the posterior accurately down to the $\approx 15\sigma$ credible region in the $\Omega_{\rm m}-\Omega_\Lambda$ plane, while for the same computational work the affine-invariant MCMC sampling implemented in the {\tt emcee} code samples the posterior reliably only to $\approx 3\sigma$.
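A minimal sketch of the windowing idea on a toy bimodal posterior, not the authors' public package: each umbrella window samples a harmonically biased log-density with a plain Metropolis walker, and the per-window samples would subsequently be reweighted and stitched together (e.g. with WHAM/MBAR) to recover the unbiased posterior, including its low-probability regions. All names and parameter values are illustrative.

```python
import numpy as np

def log_post(x):
    """Toy bimodal log-posterior with a deep low-probability valley between modes."""
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

def metropolis(log_p, x0, n=5_000, step=0.5, rng=None):
    """Plain random-walk Metropolis sampler for a one-dimensional log-density."""
    rng = np.random.default_rng() if rng is None else rng
    x, lp, chain = x0, log_p(x0), []
    for _ in range(n):
        y = x + step * rng.standard_normal()
        lq = log_p(y)
        if np.log(rng.uniform()) < lq - lp:
            x, lp = y, lq
        chain.append(x)
    return np.array(chain)

# Umbrella windows: harmonic biases centred along the coordinate of interest.
centres, k = np.linspace(-6, 6, 13), 2.0
windows = []
for c in centres:
    biased = lambda x, c=c: log_post(x) - 0.5 * k * (x - c) ** 2  # biased log-density
    windows.append(metropolis(biased, x0=c))
# `windows` now holds one chain per umbrella; the reweighting/stitching step
# that removes the biases is omitted here for brevity.
```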
Literate computing has emerged as an important tool for computational studies and open science, with a growing folklore of best practices. In this work, we report two case studies - one in computational magnetism and another in computational mathematics - where domain-specific software was exposed to the Jupyter environment. This enables high-level control of simulations and computation, interactive exploration of computational results, batch processing on HPC resources, and reproducible workflow documentation in Jupyter notebooks. In the first study, Ubermag drives existing computational micromagnetics software through a domain-specific language embedded in Python. In the second study, a dedicated Jupyter kernel interfaces with the GAP system for computational discrete algebra and its dedicated programming language. In light of these case studies, we discuss the benefits of this approach, including progress toward more reproducible and reusable research results and outputs, notably through the use of infrastructure such as JupyterHub and Binder.