This paper continues our treatment of the Neutron Transport Equation (NTE), building on the work in [arXiv:1809.00827v2], [arXiv:1810.01779v4] and [arXiv:1901.00220v3], which describes the flux of neutrons through inhomogeneous fissile medium. Our aim is to analyse existing and novel Monte-Carlo (MC) algorithms for simulating the lead eigenvalue associated with the underlying model. This quantity is of principal importance in the nuclear regulatory industry, for which the NTE must be solved on complicated inhomogeneous domains corresponding to nuclear reactor cores, irradiative hospital equipment, food irradiation equipment and so on. We include a complexity analysis of such MC algorithms, noting that no such undertaking has previously appeared in the literature. The new MC algorithms offer a variety of trade-offs between accuracy and cost, as well as the possibility of more convenient
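As a rough illustration of what generation-based Monte-Carlo estimation of a lead eigenvalue looks like, the following sketch runs a stochastic power iteration on a toy two-state fission model; the offspring matrix `M`, the population size and the burn-in length are hypothetical choices, and this is not one of the algorithms analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-state fission model: M[i, j] is the mean number of next-generation
# neutrons in state j produced by one neutron in state i (hypothetical numbers).
M = np.array([[0.4, 0.7],
              [0.6, 0.5]])

pop = np.array([500, 500])            # initial neutron population per state
burn_in, n_gen = 20, 200
k_estimates = []

for g in range(burn_in + n_gen):
    # Each neutron in state i produces Poisson-distributed offspring in state j.
    offspring = rng.poisson(np.outer(pop, np.ones(2)) * M).sum(axis=0)
    k_gen = offspring.sum() / max(pop.sum(), 1)   # generation multiplication factor
    if g >= burn_in:
        k_estimates.append(k_gen)
    # Renormalise the population so the simulation neither dies out nor explodes;
    # the eigenvalue itself is tracked through k_gen.
    total = offspring.sum()
    pop = rng.multinomial(1000, offspring / total) if total > 0 else pop

print("MC estimate of the lead eigenvalue:", np.mean(k_estimates))
print("exact lead eigenvalue of M:        ", np.max(np.abs(np.linalg.eigvals(M))))
```

The toy estimate carries a small population-control bias of order one over the population size, which is one of the effects a complexity analysis of such schemes has to account for.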
In this article, we consider the preconditioned Hamiltonian Monte Carlo (pHMC) algorithm defined directly on an infinite-dimensional Hilbert space. In this context, and under a condition reminiscent of strong log-concavity of the target measure, we prove convergence bounds for adjusted pHMC in the standard 1-Wasserstein distance. The arguments rely on a synchronous coupling of two copies of pHMC, which is controlled by adapting elements from arXiv:1805.00452.
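A minimal finite-dimensional sketch of an adjusted pHMC update may help fix ideas: the Gaussian reference part of the Hamiltonian flow is integrated exactly (a rotation), the likelihood potential enters through half kicks preconditioned by the covariance, and a Metropolis correction is applied. The potential `Phi`, the covariance spectrum and the step-size parameters below are illustrative assumptions rather than those of the paper, and the energy difference is only well defined here because the state space has been truncated to finite dimension.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: pHMC targeting pi(x) ∝ exp(-Phi(x)) N(0, C)(dx) with diagonal C
# whose eigenvalues decay like k^{-2}; all choices below are illustrative.
d = 50
c = 1.0 / np.arange(1, d + 1) ** 2               # eigenvalues of the covariance C
Phi = lambda x: 0.5 * np.sum(np.tanh(x) ** 2)    # toy log-likelihood potential
grad_Phi = lambda x: np.tanh(x) * (1 - np.tanh(x) ** 2)

def hamiltonian(q, p):
    # H(q, p) = Phi(q) + <q, C^{-1} q>/2 + <p, C^{-1} p>/2 (finite-dimensional)
    return Phi(q) + 0.5 * np.sum(q ** 2 / c) + 0.5 * np.sum(p ** 2 / c)

def phmc_step(q, h=0.1, n_leapfrog=10):
    p = np.sqrt(c) * rng.standard_normal(d)      # momentum refreshed from N(0, C)
    q_new, p_new = q.copy(), p.copy()
    for _ in range(n_leapfrog):
        p_new -= 0.5 * h * c * grad_Phi(q_new)   # half kick, preconditioned by C
        q_new, p_new = (np.cos(h) * q_new + np.sin(h) * p_new,
                        -np.sin(h) * q_new + np.cos(h) * p_new)  # exact Gaussian flow
        p_new -= 0.5 * h * c * grad_Phi(q_new)   # half kick, preconditioned by C
    # Metropolis adjustment ("adjusted" pHMC)
    if np.log(rng.uniform()) < hamiltonian(q, p) - hamiltonian(q_new, p_new):
        return q_new
    return q

q = np.zeros(d)
for _ in range(1000):
    q = phmc_step(q)
```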
Inspired by recent progress in quantum algorithms for ordinary and partial differential equations, we study quantum algorithms for stochastic differential equations (SDEs). First, we provide a quantum algorithm that gives a quadratic speed-up for multilevel Monte Carlo methods in a general setting. As applications, we use it to compute expectation values determined by classical solutions of SDEs, with improved dependence on precision. We demonstrate the use of this algorithm in a variety of applications arising in mathematical finance, such as the Black-Scholes and Local Volatility models, and Greeks. We also provide a quantum algorithm based on sublinear binomial sampling for the binomial option pricing model with the same improvement.
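For context, the classical multilevel Monte Carlo estimator that the quantum algorithm accelerates can be sketched as follows for a European call under geometric Brownian motion with Euler-Maruyama discretisation; the model parameters and the fixed per-level sample sizes are illustrative, and no quantum subroutine is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Classical MLMC estimator for E[f(S_T)] under geometric Brownian motion,
# discretised by Euler-Maruyama with 2^l steps on level l (illustrative setup).
S0, r, sigma, T, K = 100.0, 0.05, 0.2, 1.0, 100.0
payoff = lambda s: np.exp(-r * T) * np.maximum(s - K, 0.0)

def euler_paths(n_paths, n_steps, dW=None):
    """Terminal values of Euler-Maruyama paths; optionally reuse given increments dW."""
    dt = T / n_steps
    if dW is None:
        dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    S = np.full(n_paths, S0)
    for k in range(n_steps):
        S = S + r * S * dt + sigma * S * dW[:, k]
    return S, dW

def mlmc_estimate(max_level=6, n_per_level=20000):
    est = np.mean(payoff(euler_paths(n_per_level, 1)[0]))         # level 0
    for level in range(1, max_level + 1):
        n_fine = 2 ** level
        S_fine, dW = euler_paths(n_per_level, n_fine)
        # Coupled coarse path: sum consecutive pairs of fine Brownian increments.
        dW_coarse = dW[:, 0::2] + dW[:, 1::2]
        S_coarse, _ = euler_paths(n_per_level, n_fine // 2, dW_coarse)
        est += np.mean(payoff(S_fine) - payoff(S_coarse))          # level correction
    return est

print("MLMC price estimate:", mlmc_estimate())
```

The telescoping sum over coupled levels is exactly the structure whose per-level mean estimation admits the quadratic quantum speed-up described in the abstract.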
Practitioners wishing to experience the efficiency gains from using low-discrepancy sequences need correct, well-written software. This article, based on our MCQMC 2020 tutorial, describes some of the better quasi-Monte Carlo (QMC) software available. We highlight the key software components required to approximate multivariate integrals or expectations of functions of vector random variables by QMC. We have combined these components in QMCPy, a Python open source library, which we hope will draw the support of the QMC community. Here we introduce QMCPy.
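As a language-agnostic illustration of the components the article describes (a low-discrepancy generator, a transform to the target measure, and sample-mean cubature), here is a small sketch using SciPy's Sobol' sampler rather than QMCPy's own API; the integrand and sample sizes are arbitrary choices, and QMCPy packages analogous components behind a single interface.

```python
import numpy as np
from scipy.stats import qmc, norm

# QMC workflow sketch: low-discrepancy points -> transform to the target
# measure -> sample-mean cubature, compared against plain Monte Carlo.
d, m = 4, 14                                         # dimension, 2^m points
f = lambda x: np.cos(np.linalg.norm(x, axis=1))      # integrand of E[f(X)], X ~ N(0, I_d)

sobol = qmc.Sobol(d, scramble=True, seed=7)
u = sobol.random_base2(m)                            # scrambled Sobol' points in (0, 1)^d
x = norm.ppf(u)                                      # transform to the Gaussian measure
print("QMC estimate of E[f(X)]:", f(x).mean())

x_mc = np.random.default_rng(7).standard_normal((2 ** m, d))
print("plain MC estimate:      ", f(x_mc).mean())
```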
This position paper summarizes a recently developed research program focused on inference in the context of data-centric science and engineering applications, and forecasts its trajectory forward over the next decade. In this context one often endeavours to learn complex systems in order to make more informed predictions and high-stakes decisions under uncertainty. Some key challenges which must be met here are robustness, generalizability, and interpretability. The Bayesian framework addresses these three challenges while bringing with it a fourth, undesirable feature: it is typically far more expensive than its deterministic counterparts. In the 21st century, and increasingly over the past decade, a growing number of methods have emerged which allow one to leverage cheap low-fidelity models in order to precondition algorithms for performing inference with more expensive models, making Bayesian inference tractable in the context of high-dimensional and expensive models. Notable examples are multilevel Monte Carlo (MLMC), multi-index Monte Carlo (MIMC), and their randomized counterparts (rMLMC), which provably achieve a dimension-independent (including $\infty$-dimensional) canonical complexity of $\mathcal{O}(1/\mathrm{MSE})$ with respect to the mean squared error (MSE). Some parallelizability is typically lost in an inference context, but recently this has been largely recovered via novel double randomization approaches. Such an approach delivers i.i.d. samples of quantities of interest which are unbiased with respect to the infinite-resolution target distribution. Over the coming decade, this family of algorithms has the potential to transform data-centric science and engineering, as well as classical machine learning applications such as deep learning, by scaling up and scaling out fully Bayesian inference.
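The double-randomization idea can be illustrated with a single-term randomized MLMC estimator on a toy problem: each draw picks a random resolution level, computes a coupled level correction, and reweights by the level probability, yielding i.i.d. samples that are unbiased for the infinite-resolution quantity. The toy target, the antithetic coupling and the level distribution below are assumptions made for illustration, not the inference algorithms discussed above.

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-term rMLMC sketch: i.i.d. unbiased estimates of f(E[Y]) even though
# only level-l approximations f(mean of 2^l samples of Y) are computable.
f = np.exp                                       # smooth function of the unknown mean
sample_Y = lambda n: rng.uniform(0.0, 2.0, n)    # E[Y] = 1, so the target is f(1) = e

def correction(level):
    """Antithetic coupled correction Delta_l with sum_l E[Delta_l] = f(E[Y])."""
    if level == 0:
        return f(sample_Y(1).mean())
    y = sample_Y(2 ** level)
    fine = f(y.mean())
    coarse = 0.5 * (f(y[: 2 ** (level - 1)].mean()) + f(y[2 ** (level - 1):].mean()))
    return fine - coarse

def rmlmc_sample(beta=1.5):
    """One i.i.d. unbiased draw: sample a level L geometrically, reweight by 1/P(L)."""
    p = 1.0 - 2.0 ** (-beta)                     # P(L = l) = p * 2^{-beta*l}, l = 0, 1, ...
    level = rng.geometric(p) - 1
    return correction(level) / (p * 2.0 ** (-beta * level))

samples = np.array([rmlmc_sample() for _ in range(100_000)])
print("rMLMC estimate:", samples.mean(), " target:", np.exp(1.0))
```

Because each draw is independent and unbiased, the estimator parallelizes trivially across samples, which is the parallelizability the text refers to as being recovered.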
Random batch algorithms are constructed for quantum Monte Carlo simulations. The main objective is to alleviate the computational cost associated with the calculations of two-body interactions, including the pairwise interactions in the potential energy, and the two-body terms in the Jastrow factor. In the framework of variational Monte Carlo methods, the random batch algorithm is constructed based on the over-damped Langevin dynamics, so that updating the position of each particle in an $N$-particle system requires only $\mathcal{O}(1)$ operations; thus, for each time step, the computational cost for $N$ particles is reduced from $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$. For diffusion Monte Carlo methods, the random batch algorithm uses an energy decomposition to avoid the computation of the total energy in the branching step. The effectiveness of the random batch method is demonstrated using a system of liquid ${}^4$He atoms interacting with a graphite surface.
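A schematic of the random batch treatment of pairwise interactions in the over-damped Langevin update is sketched below: at every step the particles are shuffled into small batches and only intra-batch forces are computed, rescaled by $(N-1)/(p-1)$, so the per-step cost is $\mathcal{O}(N)$. The smooth Gaussian pair force, the batch size and the step size are placeholder choices, not the ${}^4$He-graphite interactions or Jastrow terms used in the paper.

```python
import numpy as np

rng = np.random.default_rng(4)

# Random batch sketch for the pairwise term in over-damped Langevin dynamics:
# N/p batches of size p per step, intra-batch forces rescaled by (N-1)/(p-1).
N, p, dt, beta = 256, 4, 1e-3, 1.0
x = rng.uniform(0.0, 10.0, (N, 3))               # particle positions (toy box)

def pair_force(r):
    """Smooth repulsive pair force (Gaussian kernel) for displacement r = x_i - x_j."""
    return r * np.exp(-np.sum(r * r))

def random_batch_step(x):
    idx = rng.permutation(N).reshape(-1, p)      # random batches of size p
    force = np.zeros_like(x)
    for batch in idx:
        for a in range(p):
            for b in range(a + 1, p):
                i, j = batch[a], batch[b]
                f = (N - 1) / (p - 1) * pair_force(x[i] - x[j])
                force[i] += f
                force[j] -= f
    noise = np.sqrt(2.0 * dt / beta) * rng.standard_normal(x.shape)
    return x + dt * force + noise                # over-damped Langevin update

for _ in range(100):
    x = random_batch_step(x)
```

Each step touches $N(p-1)/2$ pairs instead of $N(N-1)/2$, which is the source of the $\mathcal{O}(N^2)$ to $\mathcal{O}(N)$ reduction stated in the abstract.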