
Simulating individual-based models of bacterial chemotaxis with asymptotic variance reduction

Published by: Mathias Rousset
Publication date: 2011
Language: English





We discuss variance-reduced simulations for an individual-based model of chemotaxis of bacteria with internal dynamics. The variance reduction is achieved via a coupling of this model with a simpler process in which the internal dynamics is replaced by direct gradient sensing of the chemoattractant concentrations. In the companion paper \cite{limits}, we have rigorously shown, using a pathwise probabilistic technique, that both processes converge to the same advection-diffusion process in the diffusive asymptotics. In this work, a direct coupling is achieved between the paths of individual bacteria simulated by the two models, by using the same sets of random numbers in both simulations. This coupling is used to construct a hybrid scheme with reduced variance: we first compute a deterministic solution of the kinetic density description of the direct gradient sensing model; the deviations due to the presence of internal dynamics are then evaluated via the coupled individual-based simulations. We show that the resulting variance reduction is \emph{asymptotic}, in the sense that, in the diffusive asymptotics, the variance of the difference between the two processes vanishes with the small parameter.
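The common-random-numbers coupling can be sketched in a few lines. The one-dimensional step rules below are toy stand-ins for the two models (the step functions, step sizes, and the extra drift term are illustrative assumptions, not the paper's actual processes); what matters is that both paths are driven by the same random numbers, so their difference has small variance:

```python
import numpy as np

rng = np.random.default_rng(0)

def step_internal(x, u, dt=0.01):
    # toy "internal dynamics" model: random step plus a small extra drift
    return x + np.sqrt(dt) * (2 * u - 1) + dt * 0.1 * np.cos(x)

def step_gradient(x, u, dt=0.01):
    # toy "direct gradient sensing" model, driven by the SAME numbers u
    return x + np.sqrt(dt) * (2 * u - 1)

def hybrid_estimate(n_paths=20000, n_steps=200, mean_simple=0.0):
    # mean_simple would come from a deterministic (kinetic density) solve
    # of the simple model; here it is 0 by symmetry of the toy dynamics
    x_int = np.zeros(n_paths)
    x_grad = np.zeros(n_paths)
    for _ in range(n_steps):
        u = rng.random(n_paths)              # common random numbers
        x_int = step_internal(x_int, u)
        x_grad = step_gradient(x_grad, u)
    correction = x_int - x_grad              # low-variance deviation term
    return mean_simple + correction.mean(), correction.std(), x_int.std()
```

In this cartoon the standard deviation of the coupled correction is far smaller than that of the raw paths, which is the mechanism the hybrid scheme exploits.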


Read also

We discuss velocity-jump models for chemotaxis of bacteria with an internal state that allows the velocity jump rate to depend on the memory of the chemoattractant concentration along their path of motion. Using probabilistic techniques, we provide a pathwise result showing that the considered process converges to an advection-diffusion process in the (long-time) diffusion limit. We also (re-)prove, using the same approach, that the same limiting equation arises for a related, simpler process with direct sensing of the chemoattractant gradient. Additionally, we propose a time discretization technique that retains these diffusion limits exactly, i.e., without any error that depends on the time discretization. In the companion paper \cite{variance}, these results are used to construct a coupling technique that allows numerical simulation of the process with internal state with asymptotic variance reduction, in the sense that the variance vanishes in the diffusion limit.
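A cartoon of such a velocity-jump process with an internal state can be written as follows. The chemoattractant field S(x) = x, the relaxation law for the memory variable, and the tumbling-rate formula are all invented for illustration; the point is only the structure (an internal variable tracking the concentration along the path, and a jump rate biased by the mismatch):

```python
import numpy as np

rng = np.random.default_rng(1)

def velocity_jump_path(n_steps=5000, dt=0.01, tau=1.0):
    # hypothetical setup: S(x) = x, memory y relaxes to S along the path,
    # and the tumble rate drops when S(x) - y > 0 (moving up the gradient)
    x, v, y = 0.0, 1.0, 0.0
    for _ in range(n_steps):
        y += dt * (x - y) / tau              # internal state: memory of S
        rate = max(0.1, 1.0 - (x - y))       # biased velocity-jump rate
        if rng.random() < rate * dt:
            v = -v                           # tumble: reverse the velocity
        x += v * dt
    return x
```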
X. Blanc (2018)
This article is devoted to the design of an importance sampling method for the Monte Carlo simulation of a linear transport equation. This model is of great importance in the simulation of inertial confinement fusion experiments. Our method is restricted to a spherically symmetric idealized design: an outer sphere emitting radiation towards an inner sphere, which in practice should be thought of as the hohlraum and the fusion capsule, respectively. We compute the importance function as the solution of the corresponding stationary adjoint problem. In doing so, we obtain a significant reduction of the variance (by a factor of 50 to 100) at a moderate increase in computational cost (by a factor of 2 to 8).
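The importance-sampling mechanism can be illustrated on a one-dimensional toy integral. The integrand and the hand-picked importance density below are stand-ins (in the paper the importance function is the solution of the stationary adjoint problem, not a formula chosen by hand):

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate(n=100_000):
    # toy target: I = integral of e^(-10x) over [0,1], approx 0.1
    f = lambda x: np.exp(-10 * x)

    naive = f(rng.random(n))                 # plain Monte Carlo samples

    # importance density g(x) proportional to e^(-5x) on [0,1],
    # sampled by inverse transform; weights are f/g
    c = 1 - np.exp(-5.0)
    u = rng.random(n)
    x_is = -np.log(1 - c * u) / 5.0
    weighted = f(x_is) * c / (5.0 * np.exp(-5.0 * x_is))

    return naive.mean(), naive.var(), weighted.mean(), weighted.var()
```

Both estimators are unbiased; the weighted one concentrates samples where the integrand is large, so its variance is markedly smaller.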
The Alternating Direction Method of Multipliers (ADMM) is a popular method for solving machine learning problems. Stochastic ADMM was first proposed to reduce the per-iteration computational complexity, making it more suitable for big data problems. Recently, variance reduction techniques have been integrated with stochastic ADMM to obtain fast convergence rates, as in SAG-ADMM and SVRG-ADMM, but the convergence is still suboptimal with respect to the smoothness constant. In this paper, we propose a new accelerated stochastic ADMM algorithm with variance reduction, which enjoys faster convergence than all other stochastic ADMM algorithms. We theoretically analyze its convergence rate and show that its dependence on the smoothness constant is optimal. We also empirically validate its effectiveness and show its advantage over other stochastic ADMM algorithms.
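The variance-reduction ingredient that SVRG-ADMM builds on can be sketched in its plain-gradient form. This is a toy least-squares problem with hand-picked step sizes, not the accelerated ADMM algorithm of the paper; it only shows the SVRG correction (stochastic gradient, minus the same sample's gradient at a snapshot, plus the snapshot's full gradient):

```python
import numpy as np

rng = np.random.default_rng(3)

def svrg(A, b, lr=0.1, epochs=20, inner=50):
    # SVRG on the least-squares loss (1/2n) * ||Ax - b||^2
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        x_snap = x.copy()
        full_grad = A.T @ (A @ x_snap - b) / n     # full gradient at snapshot
        for _ in range(inner):
            i = rng.integers(n)
            g_i = A[i] * (A[i] @ x - b[i])         # stochastic gradient at x
            g_snap = A[i] * (A[i] @ x_snap - b[i]) # same sample at snapshot
            x -= lr * (g_i - g_snap + full_grad)   # variance-reduced step
    return x
```

Near the optimum the two stochastic terms nearly cancel, so the update variance vanishes and a constant step size still gives linear convergence.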
The trace of a matrix function f(A), most notably of the matrix inverse, can be estimated stochastically using samples ⟨x, f(A)x⟩ if the components of the random vectors x obey an appropriate probability distribution. However, such Monte Carlo sampling suffers from the fact that the number of samples required grows quadratically with the desired accuracy, making high-precision estimation very costly. In this paper we suggest and investigate a multilevel Monte Carlo approach which uses a multigrid hierarchy to stochastically estimate the trace. This results in a substantial reduction of the variance, so that higher precision can be obtained at much less effort. We illustrate this for the trace of the inverse using three different classes of matrices.
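The single-level estimator underlying this approach is the Hutchinson trace estimator, sketched below with the standard Rademacher probe vectors (the multilevel method then splits this estimator across a multigrid hierarchy to shrink the variance; that hierarchy is not shown here):

```python
import numpy as np

rng = np.random.default_rng(4)

def hutchinson_trace(A, n_samples=100):
    # E[x^T A x] = tr(A) when the entries of x are independent
    # Rademacher variables (+1 or -1 with probability 1/2)
    n = A.shape[0]
    xs = rng.choice([-1.0, 1.0], size=(n_samples, n))   # probe vectors
    return np.mean([x @ A @ x for x in xs])
```

For a diagonal matrix the estimator is exact (each sample equals the trace); in general its variance is driven by the off-diagonal mass of A, which is what the multilevel splitting attacks.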
In this work, we propose a distributed algorithm for stochastic non-convex optimization. We consider a worker-server architecture where a set of $K$ worker nodes (WNs), in collaboration with a server node (SN), jointly aim to minimize a global, potentially non-convex objective function. The objective function is assumed to be the sum of local objective functions available at each WN, with each node having access to only the stochastic samples of its local objective function. In contrast to existing approaches, we employ a momentum-based single-loop distributed algorithm which eliminates the need to compute large-batch gradients to achieve variance reduction. We propose two algorithms, one with adaptive and the other with non-adaptive learning rates. We show that the proposed algorithms achieve the optimal computational complexity while attaining linear speedup with the number of WNs. Specifically, the algorithms reach an $\epsilon$-stationary point $x_a$ with $\mathbb{E}\|\nabla f(x_a)\| \leq \tilde{O}(K^{-1/3}T^{-1/2} + K^{-1/3}T^{-1/3})$ in $T$ iterations, thereby requiring $\tilde{O}(K^{-1} \epsilon^{-3})$ gradient computations at each WN. Moreover, our approach does not assume identical data distributions across WNs, making it general enough for federated learning applications.
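The momentum-based recursion that replaces large batches can be sketched in the style of STORM-type estimators (the objective, step sizes, momentum parameter, and noise model below are illustrative assumptions, and the single-worker version is shown rather than the distributed one). The key is that the same noise sample is used at both the current and previous iterates:

```python
import numpy as np

rng = np.random.default_rng(5)

def storm_like(grad, x0, steps=500, lr=0.05, a=0.1, noise=0.1):
    # momentum-based variance reduction:
    # d_t = g(x_t; xi_t) + (1 - a) * (d_{t-1} - g(x_{t-1}; xi_t))
    x = x0.copy()
    d = grad(x) + noise * rng.normal(size=x.shape)   # initial estimator
    for _ in range(steps):
        x_prev = x
        x = x - lr * d
        xi = noise * rng.normal(size=x.shape)        # ONE sample xi_t, reused
        d = (grad(x) + xi) + (1 - a) * (d - (grad(x_prev) + xi))
    return x
```

Because xi is shared between the two gradient evaluations, the noise enters the recursion only through the factor a, so the estimation error of d contracts without ever forming a large batch.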