
Uniform $\varepsilon$-Stability of Distributed Nonlinear Filtering over DNAs: Gaussian-Finite HMMs

Published by: Dionysios Kalogerias
Publication date: 2016
Language: English





In this work, we study stability of distributed filtering of Markov chains with finite state space, partially observed in conditionally Gaussian noise. We consider a nonlinear filtering scheme over a Distributed Network of Agents (DNA), which relies on the distributed evaluation of the likelihood part of the centralized nonlinear filter and is based on a particular specialization of the Alternating Direction Method of Multipliers (ADMM) for fast average consensus. Assuming the same number of consensus steps between any two consecutive noisy measurements for each sensor in the network, we fully characterize a minimal number of such steps, such that the distributed filter remains uniformly stable with a prescribed accuracy level $\varepsilon \in (0,1]$, within a finite operational horizon $T$, and across all sensors. Stability is in the sense of the $\ell_1$-norm between the centralized and distributed filters.
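As a rough illustration of the setup described above (not the paper's exact construction), the sketch below runs a finite-state HMM filter at each sensor of a small network, approximating the network-wide sum of per-sensor log-likelihoods with a fixed number K of plain average-consensus steps between measurements, and records the worst-case $\ell_1$ deviation from the centralized filter over the horizon. The ADMM-based consensus of the paper is replaced here by simple linear consensus iterations, and all model parameters (ring topology, Metropolis weights, noise levels) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
M, N, T, K = 3, 5, 50, 10          # states, sensors, horizon, consensus steps per measurement

P = np.array([[0.8, 0.1, 0.1],     # row-stochastic transition matrix of the hidden chain
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
mu = rng.normal(0.0, 2.0, (N, M))  # per-sensor, state-dependent observation means (assumed)
sigma = 1.0                        # observation noise standard deviation

# Doubly stochastic consensus matrix for a ring network (Metropolis weights)
A = np.zeros((N, N))
for i in range(N):
    A[i, (i - 1) % N] = A[i, (i + 1) % N] = 1.0 / 3.0
np.fill_diagonal(A, 1.0 - A.sum(axis=1))
Ak = np.linalg.matrix_power(A, K)  # combined effect of K consensus steps

def bayes_update(loglike, predicted):
    # Measurement update with max-subtraction for numerical stability
    w = np.exp(loglike - loglike.max()) * predicted
    return w / w.sum()

# Simulate the chain and the per-sensor observations
x = np.zeros(T, dtype=int)
for t in range(1, T):
    x[t] = rng.choice(M, p=P[x[t - 1]])
y = mu[:, x].T + sigma * rng.normal(size=(T, N))       # y[t, s]: measurement of sensor s at time t

pi_c = np.full(M, 1.0 / M)          # centralized posterior
pi_d = np.tile(pi_c, (N, 1))        # one posterior copy per sensor
worst = 0.0

for t in range(T):
    L = np.stack([-0.5 * ((y[t, s] - mu[s]) / sigma) ** 2 for s in range(N)])   # (N, M) log-likelihoods

    # Centralized filter: exact sum of the per-sensor log-likelihoods
    pi_c = bayes_update(L.sum(axis=0), pi_c @ P)

    # Distributed filter: K consensus steps approximate the network-wide average log-likelihood
    Z = Ak @ L
    for s in range(N):
        pi_d[s] = bayes_update(N * Z[s], pi_d[s] @ P)  # rescale the average back into a sum

    worst = max(worst, np.abs(pi_d - pi_c).sum(axis=1).max())   # worst l1 error across sensors

print(f"worst-case l1 deviation over T={T} steps with K={K} consensus rounds: {worst:.2e}")

Increasing K drives the printed deviation toward zero, which is the quantity the paper's uniform stability guarantee controls.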




Read also

In this paper, we consider an anticipative nonlinear filtering problem, in which the observation noise is correlated with the past of the signal. This new signal-observation model has applications both in finance models with insider trading and in engineering. We derive a new equation for the filter in this context, analyzing both the nonlinear and the linear cases. We also handle the case of a finite filter with Volterra-type observation. The performance of our algorithm is presented through numerical experiments.
The classical asymptotic theory for parametric $M$-estimators guarantees that, in the limit of infinite sample size, the excess risk has a chi-square type distribution, even in the misspecified case. We demonstrate how self-concordance of the loss allows us to characterize the critical sample size sufficient to guarantee a chi-square type in-probability bound for the excess risk. Specifically, we consider two classes of losses: (i) self-concordant losses in the classical sense of Nesterov and Nemirovski, i.e., whose third derivative is uniformly bounded by the $3/2$ power of the second derivative; (ii) pseudo self-concordant losses, for which the power is removed. These classes contain losses corresponding to several generalized linear models, including the logistic loss and pseudo-Huber losses. Our basic result under minimal assumptions bounds the critical sample size by $O(d \cdot d_{\text{eff}})$, where $d$ is the parameter dimension and $d_{\text{eff}}$ is the effective dimension that accounts for model misspecification. In contrast to existing results, we only impose local assumptions that concern the population risk minimizer $\theta_*$. Namely, we assume that the calibrated design, i.e., the design scaled by the square root of the second derivative of the loss, is subgaussian at $\theta_*$. Besides, for type-(ii) losses we require boundedness of a certain measure of curvature of the population risk at $\theta_*$. Our improved result bounds the critical sample size from above by $O(\max\{d_{\text{eff}}, d \log d\})$ under slightly stronger assumptions. Namely, the local assumptions must hold in the neighborhood of $\theta_*$ given by the Dikin ellipsoid of the population risk. Interestingly, we find that, for logistic regression with Gaussian design, there is no actual restriction of conditions: the subgaussian parameter and curvature measure remain near-constant over the Dikin ellipsoid. Finally, we extend some of these results to $\ell_1$-penalized estimators in high dimensions.
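For concreteness, the two loss classes referred to above can be written out as follows; this is a generic restatement in my own notation (scalar loss $t \mapsto \varphi(t)$, constant $C > 0$), not a quotation from the paper:

\begin{align*}
  \text{(i)\ self-concordant (Nesterov--Nemirovski):} \quad & |\varphi'''(t)| \le C\,\varphi''(t)^{3/2}, \\
  \text{(ii)\ pseudo self-concordant:} \quad & |\varphi'''(t)| \le C\,\varphi''(t).
\end{align*}

For instance, with $\sigma(t) = 1/(1+e^{-t})$ the logistic loss $\varphi(t) = \log(1+e^{-t})$ has $\varphi''(t) = \sigma(t)(1-\sigma(t))$ and $\varphi'''(t) = \sigma(t)(1-\sigma(t))(1-2\sigma(t))$, so $|\varphi'''(t)| \le \varphi''(t)$ and the logistic loss is pseudo self-concordant with $C = 1$.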
Hogwild! implements asynchronous Stochastic Gradient Descent (SGD) where multiple threads in parallel access a common repository containing training data, perform SGD iterations and update shared state that represents a jointly learned (global) model. We consider big data analysis where training data is distributed among local data sets in a heterogeneous way, and we wish to move SGD computations to local compute nodes where the local data resides. The results of these local SGD computations are aggregated by a central aggregator which mimics Hogwild!. We show how local compute nodes can start by choosing small mini-batch sizes which increase to larger ones in order to reduce communication cost (round interaction with the aggregator). We improve on the state-of-the-art literature and show $O(\sqrt{K})$ communication rounds for heterogeneous data for strongly convex problems, where $K$ is the total number of gradient computations across all local compute nodes. For our scheme, we prove a tight and novel non-trivial convergence analysis for strongly convex problems for heterogeneous data which does not use the bounded gradient assumption as seen in many existing publications. The tightness is a consequence of our proofs for lower and upper bounds of the convergence rate, which show a constant factor difference. We show experimental results for plain convex and non-convex problems for biased (i.e., heterogeneous) and unbiased local data sets.
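As a rough sketch of the communication-saving idea above (not the authors' exact algorithm), the example below runs local SGD rounds on heterogeneous synthetic data for a strongly convex ridge-regression objective, doubling the local mini-batch size after each communication round with a simple averaging aggregator; the problem, step-size schedule and doubling rule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)
d, nodes, n_local = 10, 4, 2000
w_true = rng.normal(size=d)

# Heterogeneous local data: each node draws its features from a differently shifted distribution
X = [rng.normal(loc=c, size=(n_local, d)) for c in np.linspace(-1.0, 1.0, nodes)]
Y = [x @ w_true + 0.1 * rng.normal(size=n_local) for x in X]

lam = 0.01  # small ridge term, so the objective is strongly convex

def grad(w, xb, yb):
    # Mini-batch gradient of 0.5*mean((x.w - y)^2) + 0.5*lam*|w|^2
    return xb.T @ (xb @ w - yb) / len(yb) + lam * w

w_global = np.zeros(d)
batch, rounds, local_iters = 8, 12, 20
for r in range(rounds):
    eta = 0.05 / (1.0 + 0.2 * r)                   # diminishing step size (assumed schedule)
    local_models = []
    for x, y in zip(X, Y):
        w = w_global.copy()
        for _ in range(local_iters):
            idx = rng.choice(n_local, size=batch, replace=False)
            w -= eta * grad(w, x[idx], y[idx])
        local_models.append(w)
    w_global = np.mean(local_models, axis=0)       # central aggregator averages the local models
    print(f"round {r:2d}  batch {batch:4d}  error {np.linalg.norm(w_global - w_true):.4f}")
    batch = min(2 * batch, 512)                    # grow the mini-batch for the next round

Later rounds thus perform more local gradient work per interaction with the aggregator, which is the mechanism by which the paper reduces the number of communication rounds.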
In many large systems, such as those encountered in biology or economics, the dynamics are nonlinear and are only known very coarsely. It is often the case, however, that the signs (excitation or inhibition) of individual interactions are known. This paper extends to nonlinear systems the classical criteria of linear sign stability introduced in the 70s, yielding simple sufficient conditions to determine stability using only the sign patterns of the interactions.
We revisit the development of grid-based recursive approximate filtering of general Markov processes in discrete time, partially observed in conditionally Gaussian noise. The grid-based filters considered rely on two types of state quantization: the Markovian type and the marginal type. We propose a set of novel, relaxed sufficient conditions ensuring strong and fully characterized pathwise convergence of these filters to the respective MMSE state estimator. In particular, for marginal state quantizations, we introduce the notion of conditional regularity of stochastic kernels, which, to the best of our knowledge, constitutes the most relaxed condition proposed under which asymptotic optimality of the respective grid-based filters is guaranteed. Further, we extend our convergence results to include filtering of bounded and continuous functionals of the state, as well as recursive approximate state prediction. For both Markovian and marginal quantizations, the whole development of the respective grid-based filters relies more on linear-algebraic techniques and less on measure-theoretic arguments, making the presentation considerably shorter and technically simpler.
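To make the grid-based construction above concrete, here is a minimal toy sketch of my own (a simple uniform-grid approximation, not tied to either of the two quantization types discussed in the abstract; the AR(1) model, grid range and noise levels are assumptions): the continuous state is restricted to grid points, the transition kernel evaluated on the grid is row-normalized into an approximate transition matrix, and the usual discrete filter recursion then yields an approximate MMSE estimate.

import numpy as np

rng = np.random.default_rng(2)
a, q, r = 0.9, 0.5, 0.4          # AR coefficient, process noise std, observation noise std
T = 200

# Simulate the true signal and its noisy observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + q * rng.normal()
y = x + r * rng.normal(size=T)

grid = np.linspace(-4.0, 4.0, 201)   # uniform grid covering the signal's stationary range

def gauss(z, s):
    return np.exp(-0.5 * (z / s) ** 2)

# Approximate transition matrix on the grid: P[i, j] ~ p(grid[j] | grid[i]), row-normalized
P = gauss(grid[None, :] - a * grid[:, None], q)
P /= P.sum(axis=1, keepdims=True)

pi = np.full(len(grid), 1.0 / len(grid))   # prior over grid points
est = np.zeros(T)
for t in range(T):
    like = gauss(y[t] - grid, r)           # conditionally Gaussian likelihood on the grid
    w = like * (pi @ P)                    # predict with the grid chain, then correct
    pi = w / w.sum()
    est[t] = pi @ grid                     # approximate MMSE estimate

print(f"grid-filter RMSE: {np.sqrt(np.mean((est - x) ** 2)):.3f}  vs. observation noise std {r}")

Refining the grid shrinks the gap between this approximation and the exact MMSE filter, which is the kind of convergence the abstract's sufficient conditions characterize.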