To forecast the time dynamics of an epidemic, we propose a discrete stochastic model that unifies and generalizes previous approaches to the subject. Viewing a given population of individuals, or groups of individuals, with given health state attributes as living in and moving between the nodes of a graph, we use Markov chain Monte Carlo techniques to simulate their movements and health state changes according to probabilities of stay preassigned to each node. We use this model to capture and predict either the geographic evolution of an epidemic in time, the evolution of an epidemic inside a heterogeneous population divided into homogeneous sub-populations, or, more generally, its evolution in a combination or superposition of the two previous contexts. We also prove that, as the size of the population increases and under a natural hypothesis, the stochastic process associated with our model converges to a deterministic process. Moreover, as the length of the time step used in the discrete model tends to zero, this deterministic process is in the limit driven by a differential equation describing the evolution of the expected number of infected individuals as a function of time. In the second part of the paper, we apply our model to study the evolution of the COVID-19 epidemic. We derive a decomposition into wavelets of the function giving the number of infectious individuals, which allows us to trace over time the expected number of infections inside each sub-population. Within this framework, we also discuss possible causes for the occurrence of multiple epidemiological waves.
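As a rough illustration of the kind of discrete-time, graph-based simulation described above, the following sketch moves individuals between three nodes according to a row-stochastic matrix whose diagonal plays the role of the preassigned probabilities of stay, and lets infection and recovery occur within each node. The three-node graph, the SIR-style health states, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all values are illustrative assumptions, not taken from the paper):
# three nodes, a row-stochastic movement matrix whose diagonal entries play the
# role of the preassigned "probabilities of stay", and SIR-style health states.
P_move = np.array([[0.90, 0.05, 0.05],
                   [0.10, 0.85, 0.05],
                   [0.05, 0.05, 0.90]])
beta, gamma = 0.3, 0.1            # per-step infection / recovery probabilities
N, T = 3000, 200                  # population size, number of time steps

node = rng.integers(0, 3, size=N)                    # current node of each individual
state = np.zeros(N, dtype=int)                       # 0 = S, 1 = I, 2 = R
state[rng.choice(N, size=10, replace=False)] = 1     # initial infections

infected_per_step = []
for t in range(T):
    # movement: each individual jumps according to the row of its current node
    u = rng.random(N)
    node = (u[:, None] > np.cumsum(P_move[node], axis=1)).sum(axis=1)
    # infection: the per-step risk scales with the prevalence inside the node
    for k in range(3):
        here = node == k
        prev = np.mean(state[here] == 1) if here.any() else 0.0
        sus = here & (state == 0)
        state[sus & (rng.random(N) < beta * prev)] = 1
    # recovery
    inf = state == 1
    state[inf & (rng.random(N) < gamma)] = 2
    infected_per_step.append(int(np.sum(state == 1)))

print(infected_per_step[::20])    # coarse epidemic curve
```

Averaging many such runs (or letting N grow) gives the deterministic curve that the abstract's limit theorem refers to.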
Delayed-acceptance Markov chain Monte Carlo (DA-MCMC) samples from a probability distribution via a two-stage version of the Metropolis-Hastings algorithm that combines the target distribution with a surrogate, i.e. an approximate and computationally cheaper version of it. DA-MCMC accelerates MCMC sampling in complex applications while still targeting the exact distribution. We design a computationally faster, albeit approximate, DA-MCMC algorithm. We consider parameter inference in a Bayesian setting where a surrogate likelihood function is introduced into the delayed-acceptance scheme. When the evaluation of the likelihood function is computationally intensive, our scheme produces a two- to four-fold speed-up compared to standard DA-MCMC, although the acceleration is highly problem dependent. Inference results for the standard delayed-acceptance algorithm and our approximated version are similar, indicating that our algorithm can return reliable Bayesian inference. As a computationally intensive case study, we introduce a novel stochastic differential equation model for protein-folding data.
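The two-stage accept/reject step at the heart of delayed acceptance can be sketched in a few lines. The toy Gaussian target, the subsampled surrogate likelihood, and all tuning values below are illustrative assumptions; only the two acceptance probabilities follow the standard delayed-acceptance Metropolis-Hastings construction (here with a flat prior and a symmetric proposal).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy target: posterior for the mean of Gaussian data under a flat prior.
data = rng.normal(1.5, 1.0, size=200)

def log_lik(theta):
    # "expensive" exact log-likelihood (up to an additive constant)
    return -0.5 * np.sum((data - theta) ** 2)

def log_lik_surrogate(theta):
    # cheap surrogate: log-likelihood of a small subsample, rescaled
    sub = data[::20]
    return (data.size / sub.size) * (-0.5) * np.sum((sub - theta) ** 2)

def da_mh(n_iter=5000, step=0.2, theta0=0.0):
    theta, ll, ll_s = theta0, log_lik(theta0), log_lik_surrogate(theta0)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.normal()
        # Stage 1: screen the proposal using the surrogate only.
        ll_s_prop = log_lik_surrogate(prop)
        a1 = min(1.0, np.exp(ll_s_prop - ll_s))
        if rng.random() < a1:
            # Stage 2: evaluate the expensive likelihood; the correction factor
            # keeps the exact posterior invariant.
            ll_prop = log_lik(prop)
            a2 = min(1.0, np.exp((ll_prop - ll) - (ll_s_prop - ll_s)))
            if rng.random() < a2:
                theta, ll, ll_s = prop, ll_prop, ll_s_prop
        chain[i] = theta
    return chain

print(da_mh().mean())   # should be close to data.mean()
```

The speed-up comes from the expensive likelihood being evaluated only for proposals that survive the cheap first stage.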
In this paper we present ACEMod, an agent-based modelling framework for studying influenza epidemics in Australia. The simulator is designed to analyse the spatiotemporal spread of contagion and the spatial synchrony of influenza across the nation. The individual-based epidemiological model accounts for mobility patterns (worker and student commuting) and human interactions derived from the 2006 Australian census and other national data sources. The high-precision simulation comprises 19.8 million stochastically generated software agents and traces the dynamics of influenza viral infection and transmission at several scales. Using this approach, we are able to synthesise epidemics in Australia with varying outbreak locations and severity. For each scenario, we investigate the spatiotemporal profiles of these epidemics, both qualitatively and quantitatively, via incidence curves, prevalence choropleths, and epidemic synchrony. This analysis illustrates the nature of influenza pandemics within Australia and facilitates future planning of effective intervention, mitigation and crisis management strategies.
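A heavily simplified, hypothetical sketch of the day/night mixing structure that census-based agent models of this kind typically use (daytime workplace or school contacts from commuting data, nighttime household contacts). This is not ACEMod itself: the group assignments, group sizes, and transmission and recovery rates below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy illustration of a census-style day/night mixing structure (this is not
# ACEMod; group sizes, rates and the two-context structure are assumptions).
n_agents, n_households, n_workplaces = 2000, 600, 100
household = rng.integers(0, n_households, size=n_agents)
workplace = rng.integers(0, n_workplaces, size=n_agents)   # "commuting" assignment
beta_home, beta_work, gamma = 0.08, 0.04, 0.25

state = np.zeros(n_agents, dtype=int)                      # 0 = S, 1 = I, 2 = R
state[rng.choice(n_agents, 5, replace=False)] = 1

def mix(groups, beta, state):
    """One half-day of within-group transmission (escape-probability form)."""
    new_state = state.copy()
    for g in np.unique(groups):
        members = np.where(groups == g)[0]
        n_inf = np.sum(state[members] == 1)
        if n_inf == 0:
            continue
        p = 1.0 - (1.0 - beta) ** n_inf
        sus = members[state[members] == 0]
        new_state[sus[rng.random(sus.size) < p]] = 1
    return new_state

daily_new = []
for day in range(50):
    before = np.sum(state > 0)
    state = mix(workplace, beta_work, state)               # daytime contacts
    state = mix(household, beta_home, state)               # nighttime contacts
    inf = state == 1
    state[inf & (rng.random(n_agents) < gamma)] = 2        # recovery
    daily_new.append(int(np.sum(state > 0) - before))

print(daily_new)                                           # toy incidence curve
```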
An important task in machine learning and statistics is the approximation of a probability measure by an empirical measure supported on a discrete point set. Stein Points are a class of algorithms for this task, which proceed by sequentially minimising a Stein discrepancy between the empirical measure and the target and, hence, require the solution of a non-convex optimisation problem to obtain each new point. This paper removes the need to solve this optimisation problem by, instead, selecting each new point based on a Markov chain sample path. This significantly reduces the computational cost of Stein Points and leads to a suite of algorithms that are straightforward to implement. The new algorithms are illustrated on a set of challenging Bayesian inference problems, and rigorous theoretical guarantees of consistency are established.
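A minimal sketch in the spirit of the Markov-chain-based selection rule described above, for a two-dimensional Gaussian target: candidates are generated by a short random-walk Metropolis run started at the last selected point, and the next point is the candidate that most reduces a kernel Stein discrepancy built on an inverse multiquadric base kernel. The target, the kernel parameters, and the chain settings are illustrative assumptions, not the paper's recommended configuration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Target: a 2-D Gaussian (illustrative choice); only its score function is needed.
d, cov = 2, np.array([[1.0, 0.6], [0.6, 1.0]])
prec = np.linalg.inv(cov)
score = lambda x: -x @ prec          # gradient of the log-density at a single point

def stein_kernel(x, y, c2=1.0):
    """Langevin-Stein kernel built on the IMQ base kernel (c2 + r^2)^(-1/2)."""
    diff = x - y
    r2 = diff @ diff
    base = c2 + r2
    sx, sy = score(x), score(y)
    return (d / base**1.5 - 3.0 * r2 / base**2.5
            + diff @ (sx - sy) / base**1.5
            + sx @ sy / base**0.5)

def next_point(points, n_candidates=30, step=1.0):
    """Generate candidates with a short random-walk Metropolis run, then pick the
    one that minimises the Stein discrepancy of the augmented point set."""
    x = points[-1]
    logp = lambda z: -0.5 * z @ prec @ z
    cands = []
    for _ in range(n_candidates):
        prop = x + step * rng.normal(size=d)
        if np.log(rng.random()) < logp(prop) - logp(x):
            x = prop
        cands.append(x.copy())
    # greedy KSD criterion: minimise k0(y, y)/2 + sum_i k0(x_i, y)
    crit = [0.5 * stein_kernel(y, y) + sum(stein_kernel(xi, y) for xi in points)
            for y in cands]
    return cands[int(np.argmin(crit))]

points = [np.zeros(d)]
for _ in range(50):
    points.append(next_point(points))
points = np.array(points)
print(points.mean(axis=0), np.cov(points.T))   # rough match to the target moments
```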
We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a particle MCMC (PMCMC) method based on an interacting pool of standard and conditional sequential Monte Carlo samplers. Like related methods, iPMCMC is a Markov chain Monte Carlo sampler on an extended space. We present empirical results showing significant improvements in mixing rates relative both to non-interacting PMCMC samplers and to a single PMCMC sampler with an equivalent memory and computational budget. An additional advantage of iPMCMC is that it is suitable for distributed and multi-core architectures.
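A compact, illustrative sketch of the iPMCMC idea for a toy linear-Gaussian state-space model: a pool of bootstrap SMC and conditional SMC workers, followed by the interaction step in which each conditional slot keeps or switches its worker with probability proportional to the workers' marginal-likelihood estimates (excluding workers held by other slots). The model, the numbers of workers and particles, and the omission of refinements such as adaptation or ancestor sampling are simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear-Gaussian state-space model (illustrative; not from the paper):
#   x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.5^2)
T, a, sig_x, sig_y = 25, 0.9, 1.0, 0.5
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + sig_x * rng.normal()
y = x_true + sig_y * rng.normal(size=T)

def smc(N, retained=None):
    """Bootstrap SMC; if `retained` is given, run conditional SMC with the last
    particle clamped to that trajectory. Returns (logZ estimate, sampled path)."""
    X = np.zeros((T, N))
    anc = np.zeros((T, N), dtype=int)
    X[0] = rng.normal(0.0, sig_x, N)
    if retained is not None:
        X[0, -1] = retained[0]
    logw = -0.5 * ((y[0] - X[0]) / sig_y) ** 2      # constants cancel across workers
    logZ = 0.0
    for t in range(1, T):
        logZ += logw.max() + np.log(np.mean(np.exp(logw - logw.max())))
        w = np.exp(logw - logw.max()); w /= w.sum()
        anc[t] = rng.choice(N, size=N, p=w)         # multinomial resampling
        X[t] = a * X[t - 1, anc[t]] + sig_x * rng.normal(size=N)
        if retained is not None:                    # keep the retained path intact
            anc[t, -1], X[t, -1] = N - 1, retained[t]
        logw = -0.5 * ((y[t] - X[t]) / sig_y) ** 2
    logZ += logw.max() + np.log(np.mean(np.exp(logw - logw.max())))
    w = np.exp(logw - logw.max()); w /= w.sum()
    k, traj = rng.choice(N, p=w), np.zeros(T)       # trace one ancestral path back
    for t in reversed(range(T)):
        traj[t] = X[t, k]
        k = anc[t, k]
    return logZ, traj

M, P, N = 4, 2, 100                                 # workers, conditional slots, particles
retained = [smc(N)[1] for _ in range(P)]            # initial retained trajectories
draws = []
for sweep in range(200):
    logZ, trajs = np.zeros(M), [None] * M
    for m in range(M - P):                          # standard SMC workers
        logZ[m], trajs[m] = smc(N)
    for j in range(P):                              # conditional SMC workers
        logZ[M - P + j], trajs[M - P + j] = smc(N, retained[j])
    # interaction step: Gibbs-update which worker each conditional slot follows,
    # with probability proportional to the marginal-likelihood estimates; a slot
    # may not claim a worker currently held by another slot.
    cond = list(range(M - P, M))
    for j in range(P):
        others = [cond[i] for i in range(P) if i != j]
        cand = [m for m in range(M) if m not in others]
        lw = logZ[cand]
        pr = np.exp(lw - lw.max()); pr /= pr.sum()
        cond[j] = cand[rng.choice(len(cand), p=pr)]
        retained[j] = trajs[cond[j]]
    if sweep >= 50:
        draws.extend(r.copy() for r in retained)

print(np.mean(draws, axis=0)[:5])                   # smoothed estimate of x_{0:4}
print(x_true[:5])
```

Because the workers are independent between interaction steps, the SMC and CSMC runs inside each sweep are the part that parallelises naturally across cores or machines.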
A novel class of non-reversible Markov chain Monte Carlo schemes relying on continuous-time piecewise-deterministic Markov processes has recently emerged. In these algorithms, the state of the Markov process evolves according to deterministic dynamics that are modified using a Markov transition kernel at random event times. These methods enjoy remarkable features, including the ability to update only a subset of the state components while the other components implicitly keep evolving, and the ability to use an unbiased estimate of the gradient of the log-target while preserving the target as the invariant distribution. However, they also suffer from important limitations. The deterministic dynamics used so far do not exploit the structure of the target. Moreover, exact simulation of the event times is feasible only for an important yet restricted class of problems and, even when it is, the implementation is application specific. This limits the applicability of these techniques and prevents the development of a generic software implementation. We introduce novel MCMC methods addressing these shortcomings. In particular, we introduce novel continuous-time algorithms relying on exact Hamiltonian flows, as well as novel non-reversible discrete-time algorithms that can exploit complex dynamics, such as approximate Hamiltonian dynamics arising from symplectic integrators, while preserving the attractive features of continuous-time algorithms. We demonstrate the performance of these schemes on a variety of applications.
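For concreteness, the sketch below implements one well-known member of the existing continuous-time class discussed above, a bouncy particle sampler, for a two-dimensional Gaussian target where the event times of the inhomogeneous Poisson process can be simulated exactly; it is not an implementation of the paper's new Hamiltonian-flow or discrete-time algorithms. The target precision matrix, refreshment rate, and run length are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Bouncy particle sampler for a 2-D Gaussian target: linear deterministic
# dynamics x(t) = x + t*v, interrupted by exact bounce and refreshment events.
A = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 2.0]]))   # precision of the target
lam_ref, T_total, dt_sample = 1.0, 2000.0, 0.5

def first_bounce_time(x, v):
    """Exact first-event time for the rate lambda(t) = max(0, a + b*t)."""
    a, b = v @ A @ x, v @ A @ v
    E = rng.exponential()
    if a >= 0:
        return (-a + np.sqrt(a * a + 2.0 * b * E)) / b
    return -a / b + np.sqrt(2.0 * E / b)

x, v = np.zeros(2), rng.normal(size=2)
t, next_sample, samples = 0.0, 0.0, []
while t < T_total:
    tau_b = first_bounce_time(x, v)
    tau_r = rng.exponential(1.0 / lam_ref)
    tau = min(tau_b, tau_r)
    # record the piecewise-linear trajectory on a regular time grid
    while next_sample <= t + tau:
        samples.append(x + (next_sample - t) * v)
        next_sample += dt_sample
    x, t = x + tau * v, t + tau
    if tau_b < tau_r:
        g = A @ x                                   # gradient of the potential
        v = v - 2.0 * (v @ g) / (g @ g) * g         # bounce: reflect the velocity
    else:
        v = rng.normal(size=2)                      # refresh the velocity
samples = np.array(samples)
print(samples.mean(axis=0))
print(np.cov(samples.T))    # should approximate the target covariance
```

The closed-form event times in first_bounce_time are exactly the application-specific ingredient the abstract points to: outside such tractable cases they must be replaced by thinning or other problem-dependent constructions.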