We present an importance sampling algorithm that can produce realisations of Markovian epidemic models that exactly match observations, taken to be the number of events of a single type over a period of time. The importance sampling can be used to construct an efficient particle filter that targets the states of the system and hence estimates the likelihood needed for Bayesian parameter inference. When used in a particle marginal Metropolis-Hastings scheme, the importance sampling provides a large speed-up, in terms of effective sample size per unit of computational time, compared to simple bootstrap sampling. The algorithm is general, with minimal restrictions, and we show how it can be applied to any discrete-state continuous-time Markov chain where we wish to exactly match the number of events of a single type over a period of time.
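To make the comparison concrete, here is a minimal sketch of the bootstrap-sampling baseline the abstract refers to, for a Markovian SIR model observed through exact per-interval infection counts. Particles are simulated forward with the Gillespie algorithm and receive weight 1 only if their simulated count matches the observation, which is why bootstrap sampling degrades quickly. All names (simulate_sir_interval, bootstrap_loglik) and parameter values are illustrative, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sir_interval(state, beta, gamma, N, t_len):
    """Gillespie simulation of a Markovian SIR model over one observation
    interval; returns the end state and the number of infection events
    (the single observed event type)."""
    S, I = state
    t, n_inf = 0.0, 0
    while I > 0:
        r_inf = beta * S * I / N
        r_rec = gamma * I
        total = r_inf + r_rec
        if total == 0.0:
            break
        t += rng.exponential(1.0 / total)
        if t >= t_len:
            break
        if rng.random() * total < r_inf:
            S, I, n_inf = S - 1, I + 1, n_inf + 1
        else:
            I -= 1
    return (S, I), n_inf

def bootstrap_loglik(counts, beta, gamma, N, n_particles=200):
    """Bootstrap particle filter for exact count observations: a particle's
    weight is 1 if its simulated count matches the datum, else 0."""
    particles = [(N - 1, 1)] * n_particles
    loglik = 0.0
    for y in counts:
        sims = [simulate_sir_interval(p, beta, gamma, N, 1.0) for p in particles]
        w = np.array([float(n == y) for _, n in sims])
        if w.sum() == 0:
            return -np.inf  # every particle inconsistent with the data
        loglik += np.log(w.mean())
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        particles = [sims[i][0] for i in idx]
    return loglik

print(bootstrap_loglik(counts=[3, 5, 4], beta=1.5, gamma=0.5, N=100))
```

The importance sampler described in the abstract avoids this hit-or-miss weighting by generating trajectories conditioned to produce the observed count, so no particles are discarded.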
We consider an evolving system for which a sequence of observations is being made, with each observation revealing additional information about current and past states of the system. We suppose each observation is made without error, but does not ful…
We consider the problem of model choice for stochastic epidemic models given partial observation of a disease outbreak through time. Our main focus is on the use of Bayes factors. Although Bayes factors have appeared in the epidemic modelling literat…
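As a reminder of the quantity involved, a Bayes factor is a ratio of marginal likelihoods. The sketch below estimates each model's evidence by simple Monte Carlo over the prior, with toy Gaussian likelihoods standing in for epidemic-model likelihoods; it is a generic illustration, not the method of the paper.

```python
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(2)

def log_evidence(log_lik, prior_sampler, n=1000):
    """Monte Carlo estimate of the log marginal likelihood:
    log Z ~= logsumexp_i log L(theta_i) - log n,  theta_i ~ prior."""
    lls = np.array([log_lik(prior_sampler()) for _ in range(n)])
    return logsumexp(lls) - np.log(n)

# Toy Gaussian stand-ins for two competing models; the shared normalising
# constant of the likelihood cancels in the Bayes factor.
data = rng.normal(0.3, 1.0, size=50)
log_lik_m1 = lambda th: -0.5 * np.sum((data - th) ** 2)  # M1: free mean
log_lik_m2 = lambda th: -0.5 * np.sum(data ** 2)         # M2: mean fixed at 0

log_bf = (log_evidence(log_lik_m1, lambda: rng.normal(0.0, 1.0))
          - log_evidence(log_lik_m2, lambda: 0.0))
print("log Bayes factor, M1 vs M2:", log_bf)
```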
In model-based reinforcement learning, generative and temporal models of environments can be leveraged to boost agent performance, either by tuning the agent's representations during training or via use as part of an explicit planning mechanism. Howev…
The COVID-19 pandemic has demonstrated how disruptive emergent disease outbreaks can be and how useful epidemic models are for quantifying risks of local outbreaks. Here we develop an analytical approach to calculate the dynamics and likelihood of ou…
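One standard analytical route to the likelihood of an outbreak is the branching-process approximation: for a Markovian SIR-type model, each introduction goes extinct with probability min(1, 1/R0), so a major outbreak occurs with probability 1 - 1/R0 per introduction. The sketch below is that textbook calculation, offered as an illustration rather than the specific approach developed in the paper.

```python
def outbreak_probability(R0, n_intro=1):
    """Branching-process approximation for a Markovian SIR-type model:
    each introduction goes extinct with probability min(1, 1/R0), so a
    major outbreak occurs with probability 1 - (1/R0)**n_intro."""
    q = min(1.0, 1.0 / R0)  # extinction probability per introduction
    return 1.0 - q ** n_intro

for R0 in (0.8, 1.5, 3.0):
    print(f"R0 = {R0}: P(outbreak from 2 introductions) = "
          f"{outbreak_probability(R0, n_intro=2):.3f}")
```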
We present a generic path-dependent importance sampling algorithm where the Girsanov-induced change of probability on the path space is represented by a sequence of neural networks taking the past of the trajectory as an input. At each learning step, …
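A minimal sketch of the general idea follows, assuming a PyTorch setup: a small network (u_net, a placeholder) supplies a path-dependent drift, paths are simulated under the tilted measure, and the Girsanov log-weight back to the driftless reference measure is accumulated alongside. The rare event P(X_1 > 2) and the second-moment training objective are placeholders, not the loss from the abstract.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
n_steps, dt = 50, 1.0 / 50

# Hypothetical path-dependent control: a small network mapping a summary of
# the past (here just current time and state) to a drift adjustment.
u_net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))

def sample_paths(batch):
    """Simulate dX = u dt + dB under the tilted measure and accumulate the
    Girsanov log-weight back to the driftless reference measure:
    log W = -sum_k u_k dB_k - 0.5 * sum_k u_k^2 dt."""
    x = torch.zeros(batch, 1)
    logw = torch.zeros(batch, 1)
    for k in range(n_steps):
        t = torch.full((batch, 1), k * dt)
        u = u_net(torch.cat([t, x], dim=1))
        db = torch.randn(batch, 1) * dt ** 0.5
        x = x + u * dt + db
        logw = logw - u * db - 0.5 * u ** 2 * dt
    return x, logw

# One learning step: importance-sampling estimate of the rare event
# P(X_1 > 2), trained by minimising the second moment of the weighted
# payoff (a standard variance-reduction surrogate; placeholder objective).
opt = torch.optim.Adam(u_net.parameters(), lr=1e-3)
opt.zero_grad()
x_T, logw = sample_paths(256)
payoff = torch.sigmoid(20.0 * (x_T - 2.0))  # smoothed indicator of X_1 > 2
loss = ((payoff * logw.exp()) ** 2).mean()
loss.backward()
opt.step()
print("IS estimate of P(X_1 > 2):", (payoff * logw.exp()).mean().item())
```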