We introduce a stable, well tested Python implementation of the affine-invariant ensemble sampler for Markov chain Monte Carlo (MCMC) proposed by Goodman & Weare (2010). The code is open source and has already been used in several published projects in the astrophysics literature. The algorithm behind emcee has several advantages over traditional MCMC sampling methods and it has excellent performance as measured by the autocorrelation time (or function calls per independent sample). One major advantage of the algorithm is that it requires hand-tuning of only 1 or 2 parameters compared to $\sim N^2$ for a traditional algorithm in an $N$-dimensional parameter space. In this document, we describe the algorithm and the details of our implementation and API. Exploiting the parallelism of the ensemble method, emcee permits any user to take advantage of multiple CPU cores without extra effort. The code is available online at http://dan.iel.fm/emcee under the MIT License.
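Below is a minimal usage sketch of the interface described above, using only the public emcee API (EnsembleSampler, run_mcmc, get_chain); the Gaussian target, dimensions, and walker count are illustrative choices, not taken from the paper.

import numpy as np
import emcee

def log_prob(x):
    # Log-density of an isotropic Gaussian target, up to an additive constant.
    return -0.5 * np.sum(x ** 2)

ndim, nwalkers = 5, 32
p0 = np.random.randn(nwalkers, ndim)      # initial positions of the walkers

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 1000)

samples = sampler.get_chain(flat=True)    # shape: (nwalkers * nsteps, ndim)

The only user-supplied ingredient is the black-box log-probability function; there is no proposal covariance to tune, in line with the claim above about the small number of hand-tuned parameters.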
emcee is a Python library implementing a class of affine-invariant ensemble samplers for Markov chain Monte Carlo (MCMC). This package has been widely applied to probabilistic modeling problems in astrophysics, where it was originally published, with some applications in other fields. When it was first released in 2012, the interface implemented in emcee was fundamentally different from the MCMC libraries that were popular at the time, such as PyMC, because it was specifically designed to work with black-box models instead of structured graphical models. This has been a popular interface for applications in astrophysics because it is often non-trivial to implement realistic physics within the modeling frameworks required by other libraries. Since emcee's release, other libraries have been developed with similar interfaces, such as dynesty (Speagle 2019). The version 3.0 release of emcee is the first major release of the library in about 6 years, and it includes a full re-write of the computational backend, several commonly requested features, and a set of new move implementations.
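A short sketch of two of the version 3.0 additions mentioned above, the pluggable move implementations and a persistent backend; the mixture of differential-evolution moves, the file name "chain.h5", and the toy target are illustrative choices (the HDF5 backend additionally requires h5py).

import numpy as np
import emcee

def log_prob(x):
    return -0.5 * np.sum(x ** 2)          # toy black-box model

ndim, nwalkers = 5, 32
p0 = np.random.randn(nwalkers, ndim)

backend = emcee.backends.HDFBackend("chain.h5")   # stores the chain on disk
backend.reset(nwalkers, ndim)

sampler = emcee.EnsembleSampler(
    nwalkers, ndim, log_prob,
    moves=[(emcee.moves.DEMove(), 0.8), (emcee.moves.DESnookerMove(), 0.2)],
    backend=backend,
)
sampler.run_mcmc(p0, 1000)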
Recent $B$-physics results have sparked great interest in the search for beyond-the-Standard-Model (BSM) physics in $b \to c \ell \bar{\nu}$ transitions. The need to analyse large datasets for these searches in a consistent manner, using high-statistics Monte-Carlo (MC) samples, led to the development of HAMMER, a software tool that enables fast morphing of MC-derived templates to include BSM effects and/or alternative parameterisations of long-distance effects, avoiding the need to re-generate simulated samples. This note describes the development of RooHammerModel, an interface between this tool and the commonly used data-fitting framework HistFactory. The code is written in C++ and admits an alternative usage in standalone RooFit analyses. In this document, the structure and functionality of the user interface are explained. Information on a public repository where it can be accessed is provided, as well as validation and performance studies of the interface. The methods developed in the construction of RooHammerModel can inform future attempts to interface HAMMER with other data-fitting frameworks.
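The following is a conceptual sketch of the weight-based template morphing idea described above; it is not the HAMMER or RooHammerModel API, and the kinematic variable, weight model, and binning are invented purely for illustration.

import numpy as np

rng = np.random.default_rng(0)
q2 = rng.uniform(3.0, 11.0, size=100_000)   # simulated kinematic variable
sm_weight = np.ones_like(q2)                 # nominal (SM) per-event weights

def morphed_template(bsm_coupling, bins=30):
    # Toy reweighting: a linear-plus-quadratic dependence on one coupling,
    # standing in for the amplitude-level reweighting performed on MC events.
    w = sm_weight * (1.0 + bsm_coupling * q2 / 10.0
                     + (bsm_coupling * q2 / 10.0) ** 2)
    hist, edges = np.histogram(q2, bins=bins, range=(3.0, 11.0), weights=w)
    return hist, edges

nominal, _ = morphed_template(0.0)   # SM template
shifted, _ = morphed_template(0.3)   # template at a new BSM parameter value,
                                     # obtained without regenerating the sample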
Vampire has long been the strongest first-order automatic theorem prover, widely used for hammer-style proof automation in ITPs such as Mizar, Isabelle, HOL, and Coq. In this work, we considerably improve the performance of Vampire in hammering over the full Mizar library by enhancing its saturation procedure with efficient neural guidance. In particular, we employ a recently proposed recursive neural network that classifies the generated clauses based solely on their derivation history. Compared with previous neural methods that consider the logical content of the clauses, our architecture makes evaluating a single clause much less time-consuming. The resulting system shows good learning capability and improves on the state-of-the-art performance on the Mizar library, while proving many theorems that the related ENIGMA system could not prove in a similar hammering evaluation.
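As an illustration of the idea (not the actual Vampire implementation), the sketch below scores a clause from its derivation history alone: each clause embedding is computed recursively from the embeddings of its parents and a vector attached to the inference rule, so evaluating a clause never reads its logical content. All dimensions, rule names, and parameters are toy values.

import numpy as np

DIM = 16
rng = np.random.default_rng(1)
rule_vec = {"axiom": rng.normal(size=DIM),
            "resolution": rng.normal(size=DIM),
            "superposition": rng.normal(size=DIM)}
W = rng.normal(size=(DIM, 2 * DIM)) / np.sqrt(2 * DIM)
w_out = rng.normal(size=DIM)

def embed(clause):
    # clause = (rule_name, [parent clauses]); leaves are input axioms.
    rule, parents = clause
    parent_sum = sum((embed(p) for p in parents), np.zeros(DIM))
    return np.tanh(W @ np.concatenate([rule_vec[rule], parent_sum]))

def score(clause):
    # Higher score = clause judged more likely to be useful for the proof.
    return float(w_out @ embed(clause))

ax = ("axiom", [])
c1 = ("resolution", [ax, ax])
c2 = ("superposition", [c1, ax])
print(score(c2))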
Markov chain Monte Carlo algorithms are used to simulate from complex statistical distributions by way of a local exploration of these distributions. This locality spares the user from needing a detailed understanding of the target, but it can also induce a lengthy exploration of the target, with the number of simulations required growing with the dimension of the problem and with the complexity of the data behind it. Several techniques are available for accelerating the convergence of these Monte Carlo algorithms, either at the exploration level (as in tempering, Hamiltonian Monte Carlo, and partly deterministic methods) or at the exploitation level (with Rao-Blackwellisation and scalable methods).
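As a concrete instance of an exploration-level acceleration, the sketch below implements a toy parallel tempering scheme: several random-walk Metropolis chains target tempered versions of a bimodal density and occasionally swap states, which helps the untempered chain move between modes. The target, temperatures, and step size are illustrative choices.

import numpy as np

rng = np.random.default_rng(2)

def log_target(x):
    # Bimodal toy target: mixture of two well-separated Gaussians.
    return np.logaddexp(-0.5 * (x - 4.0) ** 2, -0.5 * (x + 4.0) ** 2)

betas = np.array([1.0, 0.5, 0.1])   # inverse temperatures
x = np.zeros(len(betas))            # one state per temperature
samples = []

for it in range(20_000):
    # Within-temperature random-walk Metropolis updates.
    for k, beta in enumerate(betas):
        prop = x[k] + rng.normal(scale=1.0)
        if np.log(rng.uniform()) < beta * (log_target(prop) - log_target(x[k])):
            x[k] = prop
    # Swap move between a random pair of adjacent temperatures.
    k = rng.integers(len(betas) - 1)
    log_alpha = (betas[k] - betas[k + 1]) * (log_target(x[k + 1]) - log_target(x[k]))
    if np.log(rng.uniform()) < log_alpha:
        x[k], x[k + 1] = x[k + 1], x[k]
    samples.append(x[0])            # keep only the beta = 1 chain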
Rao-Blackwellization is a notion often occurring in the MCMC literature, with possibly different meanings and different connections to the original Rao-Blackwell theorem (Rao, 1945; Blackwell, 1947), including a reduction of the variance of the resulting Monte Carlo approximations. This survey reviews some of the meanings of the term.
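One of the meanings covered by such surveys can be illustrated with a short sketch: within a Metropolis-Hastings sweep, the accept/reject indicator is replaced by its conditional expectation given the current state and the proposal, in the spirit of Casella & Robert (1996). The target density and the integrand below are toy choices.

import numpy as np

rng = np.random.default_rng(3)

def log_target(x):
    return -0.5 * x ** 2          # standard Gaussian target

h = lambda x: x ** 2              # integrand of interest, E[h(X)] = 1

x = 0.0
plain, rao_black = [], []
for it in range(50_000):
    y = x + rng.normal(scale=2.0)
    alpha = min(1.0, np.exp(log_target(y) - log_target(x)))
    # Rao-Blackwellised term: conditional expectation of h(next state)
    # given the current state and the proposal, integrating out the
    # accept/reject uniform variable.
    rao_black.append(alpha * h(y) + (1.0 - alpha) * h(x))
    if rng.uniform() < alpha:
        x = y
    plain.append(h(x))

print(np.mean(plain), np.mean(rao_black))   # both approximate E[h(X)] = 1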