We study the limiting behavior of interacting particle systems indexed by large sparse graphs, which evolve either according to a discrete-time Markov chain or a diffusion, and in which particles interact directly only with their nearest neighbors in the graph. To encode sparsity we work in the framework of local weak convergence of marked (random) graphs. We show that the joint law of the particle system varies continuously with respect to local weak convergence of the underlying graph marked with the initial conditions. In addition, we show that the global empirical measure converges to a non-random limit for a large class of graph sequences, including sparse Erdős–Rényi graphs and configuration models, whereas the empirical measure of the connected component of a uniformly random vertex converges to a random limit. Along the way, we develop related results on the time-propagation of ergodicity and empirical field convergence, as well as some general results on local weak convergence of Gibbs measures in the uniqueness regime, which appear to be new. The results established here are also useful for deriving autonomous descriptions of marginal dynamics of interacting diffusions and Markov chains on sparse graphs. While limits of interacting particle systems on dense graphs have been studied extensively, relatively few works have treated the sparse regime in generality.
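To make the objects concrete, the following minimal sketch simulates interacting diffusions on a sparse Erdős–Rényi graph, in which each particle's drift depends only on its nearest neighbors, and records the global empirical measure at the terminal time. The linear attraction dynamics, all parameters, and the histogram summary are illustrative assumptions, not constructions taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's construction): interacting
# diffusions on a sparse Erdos-Renyi graph, where each particle's drift
# depends only on the states of its nearest neighbors in the graph.
import numpy as np

rng = np.random.default_rng(0)

n, c = 1000, 3.0                      # vertices; mean degree c keeps the graph sparse
A = np.triu(rng.random((n, n)) < c / n, 1)
A = (A | A.T).astype(float)           # symmetric adjacency matrix
deg = np.maximum(A.sum(axis=1), 1.0)  # guard against isolated vertices

T, dt = 1.0, 0.01
x = rng.standard_normal(n)            # i.i.d. initial conditions (product initial law)

for _ in range(int(T / dt)):
    # Illustrative dynamics: attraction toward the neighborhood average, plus noise.
    drift = A @ x / deg - x
    x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(n)

# Global empirical measure at time T, summarized by a histogram; by the paper's
# result it should stabilize to a non-random limit as n grows.
hist, _ = np.histogram(x, bins=50, range=(-4, 4), density=True)
print(hist.max())
```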
In this article we formalize the modeling of social networks as a measure-valued process and an interacting particle system. We obtain a model that describes, in continuous time, each vertex of the graph at a latent spatial state as a Dirac measure. We describe the model and its formal design as a Markov process, with values in path space, on finite and connected geometric graphs. A careful analysis of some microscopic properties of the underlying process is provided, and we study the long-time behavior of the stochastic particle system. Using a renormalization technique, under which the density of the vertices must grow to infinity, we show that the rescaled measure-valued process converges in law to the solution of a deterministic equation. The strength of our general continuous-time, measure-valued dynamical system is that its results are context-free, that is, they hold for arbitrary sequences of graphs.
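As a concrete illustration of the state space, the sketch below builds a random geometric graph and reads the vertex configuration as the empirical measure $\frac{1}{n}\sum_i \delta_{X_i}$, integrated against test functions. The graph construction, radius, and test function are illustrative assumptions; the paper's dynamics and renormalization are not reproduced.

```python
# Minimal sketch (illustrative assumptions throughout): a random geometric graph
# whose vertex configuration is read as the measure-valued state
# mu = (1/n) * sum_i delta_{X_i}, integrated against test functions.
import numpy as np

rng = np.random.default_rng(1)
n, r = 500, 0.08
pos = rng.random((n, 2))                           # latent spatial states in [0, 1]^2

# Geometric graph: connect vertices within Euclidean distance r.
d2 = ((pos[:, None, :] - pos[None, :, :]) ** 2).sum(-1)
adj = (d2 < r * r) & ~np.eye(n, dtype=bool)

def integrate(f, points):
    """<f, mu> for the empirical measure mu = (1/n) sum_i delta_{X_i}."""
    return f(points).mean()

# As the vertex density grows, such integrals stabilize, which is the sense in
# which the rescaled measure-valued process admits a deterministic limit.
print(adj.sum() // 2, integrate(lambda p: p[:, 0] * p[:, 1], pos))
```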
This paper provides a convergence analysis for the approximation of a class of path-dependent functionals of a continuous stochastic process. In the first part, given a sequence of weakly convergent processes, we provide a sufficient condition under which the path-dependent functionals of the approximating processes converge to the functional of the original process. In the second part, we study the weak convergence of a Markov chain approximation to the underlying process when the latter is given by the solution of a stochastic differential equation. Finally, we combine the results of the two parts to approximate the prices of discretely monitored barrier options under a stochastic volatility model. In contrast to the existing literature, the weak convergence analysis is obtained by means of metric computations in the Skorohod topology together with the continuous mapping theorem. The advantage of this approach is that the functional under study may be a function of stopping times, a projection of the underlying diffusion onto a sequence of random times, or the maximum/minimum of the underlying diffusion.
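As a concrete instance of the final application, the following sketch prices a discretely monitored up-and-out barrier call under a Heston-type stochastic volatility model using a generic Euler-type Markov chain approximation; the payoff is a functional of the running maximum observed at the monitoring dates. The model, its parameters, and the discretization are illustrative assumptions rather than the paper's specific scheme.

```python
# Minimal sketch (illustrative model and parameters): Monte Carlo pricing of a
# discretely monitored up-and-out barrier call under Heston-type stochastic
# volatility, via an Euler-Maruyama Markov chain for the pair (S, v).
import numpy as np

rng = np.random.default_rng(2)
S0, v0, K, B = 100.0, 0.04, 100.0, 120.0
r, kappa, theta, xi, rho = 0.02, 1.5, 0.04, 0.3, -0.7
T, m, paths = 1.0, 252, 100_000          # m monitoring dates
dt = T / m

S = np.full(paths, S0)
v = np.full(paths, v0)
alive = np.ones(paths, dtype=bool)       # paths that have not crossed the barrier

for _ in range(m):
    z1 = rng.standard_normal(paths)
    z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(paths)
    S = S * np.exp((r - 0.5 * v) * dt + np.sqrt(v * dt) * z1)   # log-Euler step
    v = np.maximum(v + kappa * (theta - v) * dt + xi * np.sqrt(v * dt) * z2, 0.0)
    alive &= S < B                       # barrier checked only at monitoring dates

payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
print(np.exp(-r * T) * payoff.mean())
```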
In this paper, we investigate the weak convergence rate of the Euler–Maruyama approximation for stochastic differential equations with irregular drifts. Explicit weak convergence rates are presented when the drift satisfies an integrability condition; this covers discontinuous drifts, which need not be piecewise continuous, as well as drifts in fractional Sobolev spaces.
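For illustration, the sketch below runs the Euler–Maruyama scheme for an SDE whose drift $b(x) = -\operatorname{sign}(x)$ is discontinuous (and not covered by piecewise-Lipschitz assumptions), and estimates the weak error of a smooth test function by comparing a coarse and a fine step size. The drift, test function, and step sizes are illustrative choices, not the paper's exact setting.

```python
# Minimal sketch: Euler-Maruyama for dX = b(X) dt + dW with discontinuous drift
# b(x) = -sign(x). The weak error of a smooth test function f is estimated by
# comparing a coarse and a fine step size (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(3)

def em_mean(h, paths=200_000, T=1.0, x0=1.0):
    """Estimate E[f(X_T)] for f(x) = cos(x) with the Euler-Maruyama chain at step h."""
    x = np.full(paths, x0)
    for _ in range(int(T / h)):
        x = x - np.sign(x) * h + np.sqrt(h) * rng.standard_normal(paths)
    return np.cos(x).mean()

coarse, fine = em_mean(2**-4), em_mean(2**-8)
print(abs(coarse - fine))   # proxy for the weak error at the coarse step
```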
In this paper, we prove convergence in distribution of Langevin processes in the overdamped asymptotics. The proof relies on the classical perturbed test function (or corrector) method, which is used both to show tightness in path space and to identify the extracted limit via a martingale problem. The result holds assuming continuity of the gradient of the potential energy and mild control of the initial kinetic energy.
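The statement can be illustrated numerically: in the small-mass scaling $\varepsilon\,dv_t = (-\nabla V(q_t) - v_t)\,dt + \sqrt{2}\,dW_t$, $dq_t = v_t\,dt$, the position should be close in law, for small $\varepsilon$, to the overdamped diffusion $dq_t = -\nabla V(q_t)\,dt + \sqrt{2}\,dW_t$. The sketch below compares a simple statistic of the two terminal laws; the double-well potential (with continuous gradient), the zero initial velocity, and all parameters are illustrative assumptions, and the perturbed test function argument itself is not reproduced.

```python
# Minimal sketch (illustrative parameters): position of the kinetic Langevin
# process in the small-mass scaling versus the limiting overdamped diffusion.
import numpy as np

rng = np.random.default_rng(4)
grad_V = lambda q: q**3 - q        # continuous gradient of a double-well potential

def kinetic_q(eps, paths=10_000, T=1.0, dt=2e-4):
    """q_T for  eps dv = (-grad_V(q) - v) dt + sqrt(2) dW,  dq = v dt."""
    q = np.zeros(paths)
    v = np.zeros(paths)            # zero initial velocity: bounded kinetic energy
    for _ in range(int(T / dt)):
        q = q + v * dt
        v = v + ((-grad_V(q) - v) * dt
                 + np.sqrt(2.0 * dt) * rng.standard_normal(paths)) / eps
    return q

def overdamped_q(paths=10_000, T=1.0, dt=2e-4):
    """q_T for the limiting diffusion  dq = -grad_V(q) dt + sqrt(2) dW."""
    q = np.zeros(paths)
    for _ in range(int(T / dt)):
        q = q - grad_V(q) * dt + np.sqrt(2.0 * dt) * rng.standard_normal(paths)
    return q

# Convergence in distribution: simple statistics of q_T should match for small eps.
print(np.mean(kinetic_q(eps=0.05)**2), np.mean(overdamped_q()**2))
```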
Gaussian processes are distributions over functions that are versatile and mathematically convenient priors in Bayesian modelling. However, their use is often impeded for data with large numbers of observations, $N$, due to the cubic (in $N$) cost of the matrix operations used in exact inference. Many solutions have been proposed that rely on $M \ll N$ inducing variables to form an approximation at a cost of $\mathcal{O}(NM^2)$. While the computational cost appears linear in $N$, the true complexity depends on how $M$ must scale with $N$ to ensure a certain quality of the approximation. In this work, we investigate upper and lower bounds on how $M$ needs to grow with $N$ to ensure high-quality approximations. We show that the KL divergence between the approximate model and the exact posterior can be made arbitrarily small for a Gaussian-noise regression model with $M \ll N$. Specifically, for the popular squared exponential kernel and $D$-dimensional Gaussian-distributed covariates, $M = \mathcal{O}((\log N)^D)$ suffices, and a method with an overall computational cost of $\mathcal{O}(N(\log N)^{2D}(\log\log N)^2)$ can be used to perform inference.
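The $\mathcal{O}(NM^2)$ cost can be seen directly in the linear algebra of an inducing-variable approximation. The sketch below computes the posterior mean of a variational (Titsias-style) sparse GP regression with a squared exponential kernel, in which the only pass over all $N$ data points forms an $M \times M$ matrix; the data, inducing-point placement, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative data and hyperparameters): the O(N M^2) linear
# algebra behind a variational inducing-point GP regression posterior mean,
# with a squared exponential kernel.
import numpy as np

rng = np.random.default_rng(5)

def se_kernel(A, B, lengthscale=1.0, variance=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

N, M, noise = 2000, 30, 0.1                # the paper bounds M by (log N)^D
X = rng.uniform(-3, 3, (N, 1))
y = np.sin(X[:, 0]) + np.sqrt(noise) * rng.standard_normal(N)
Z = np.linspace(-3, 3, M)[:, None]         # inducing inputs

Kuu = se_kernel(Z, Z) + 1e-8 * np.eye(M)   # M x M
Kuf = se_kernel(Z, X)                      # M x N kernel block, O(N M) to form
L = np.linalg.cholesky(Kuu)
A = np.linalg.solve(L, Kuf) / np.sqrt(noise)
B = A @ A.T + np.eye(M)                    # O(N M^2): the only pass over all N points
c = np.linalg.solve(B, A @ y) / np.sqrt(noise)

# Approximate posterior mean at test points, at O(M^2) cost per test point.
Xs = np.linspace(-3, 3, 5)[:, None]
mean = np.linalg.solve(L, se_kernel(Z, Xs)).T @ c
print(mean.ravel())
```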