The paper studies distributed static parameter (vector) estimation in sensor networks with nonlinear observation models and noisy inter-sensor communication. It introduces \emph{separably estimable} observation models that generalize the observability condition in linear centralized estimation to nonlinear distributed estimation. It studies two distributed estimation algorithms in separably estimable models, the $\mathcal{NU}$ (with its linear counterpart $\mathcal{LU}$) and the $\mathcal{NLU}$. Their update rule combines a \emph{consensus} step (where each sensor updates its state by weighted averaging with its neighbors' states) and an \emph{innovation} step (where each sensor processes its current local observation). This makes the three algorithms of the \textit{consensus + innovations} type, very different from traditional consensus. The paper proves consistency (all sensors reach consensus almost surely and converge to the true parameter value), efficiency, and asymptotic unbiasedness. For $\mathcal{LU}$ and $\mathcal{NU}$, it proves asymptotic normality and provides convergence rate guarantees. The three algorithms are characterized by appropriately chosen decaying weight sequences. Algorithms $\mathcal{LU}$ and $\mathcal{NU}$ are analyzed in the framework of stochastic approximation theory; algorithm $\mathcal{NLU}$ exhibits mixed time-scale behavior and biased perturbations, and its analysis requires a different approach that is developed in the paper.
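To make the consensus + innovations update rule concrete, here is a minimal numerical sketch of an $\mathcal{LU}$-style iteration, assuming a linear observation model $z_i = H_i \theta + \text{noise}$ on a ring network. The observation matrices, weight sequences, noise level, and topology below are illustrative assumptions, not the paper's specific construction.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation) of a consensus +
# innovations update of the LU type, assuming a linear observation model
# z_i = H_i @ theta + noise and a ring network.

rng = np.random.default_rng(0)
N, m = 10, 2                         # number of sensors, parameter dimension
theta = np.array([1.0, -0.5])        # true static parameter (unknown to sensors)
H = [rng.standard_normal((1, m)) for _ in range(N)]  # local observation matrices
neighbors = [((i - 1) % N, (i + 1) % N) for i in range(N)]  # ring topology

x = np.zeros((N, m))                 # local estimates, one row per sensor
for t in range(1, 20001):
    alpha = 1.0 / t                  # innovation weight (decays faster)
    beta = 0.25 / t ** 0.6           # consensus weight (decays more slowly)
    x_new = np.empty_like(x)
    for i in range(N):
        z_i = H[i] @ theta + 0.1 * rng.standard_normal(1)  # noisy local observation
        consensus = sum(x[i] - x[j] for j in neighbors[i])  # disagreement with neighbors
        innovation = H[i].T @ (z_i - H[i] @ x[i])           # local observation residual
        x_new[i] = x[i] - beta * consensus + alpha * innovation
    x = x_new

print(np.round(x, 2))  # all rows (sensor estimates) should cluster near theta
```

Note that each $H_i$ here is a single row, so no sensor is locally observable; the iterates can still converge because the aggregate $\sum_i H_i^\top H_i$ is (almost surely) invertible, which is the linear instance of the separable estimability idea.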
The paper studies the problem of distributed average consensus in sensor networks with quantized data and random link failures. To achieve consensus, dither (small noise) is added to the sensor states before quantization. When the quantizer range is unbounded (countable number of quantizer levels), stochastic approximation shows that consensus is asymptotically achieved with probability one and in mean square to a finite random variable. We show that the mean-squared error (m.s.e.) can be made arbitrarily small by tuning the link weight sequence, at the cost of the convergence rate of the algorithm. To study dithered consensus with random links when the range of the quantizer is bounded, we establish uniform boundedness of the sample paths of the unbounded quantizer. This requires characterizing the statistical properties of the supremum taken over the sample paths of the state of the quantizer. This is accomplished by splitting the state vector of the quantizer into two components: one along the consensus subspace and the other along the subspace orthogonal to the consensus subspace. The proofs use maximal inequalities for submartingale and supermartingale sequences. From these, we derive probability bounds on the excursions of the two components, from which probability bounds on the excursions of the quantizer state vector follow. The paper shows how to use these probability bounds to design the quantizer parameters and to explore tradeoffs among the number of quantizer levels, the size of the quantization steps, the desired probability of saturation, and the desired level of accuracy $\epsilon$ away from consensus. Finally, the paper illustrates the quantizer design with a numerical study.
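As a companion illustration, the following minimal sketch simulates dithered consensus with an unbounded uniform quantizer and independent link failures on a ring. The quantizer step, link-weight sequence, and failure probability are assumed values chosen for illustration, not the paper's design.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation) of dithered consensus
# with an unbounded uniform quantizer and random link failures on a ring.
# DELTA, the link-weight sequence, and the failure probability are assumed.

rng = np.random.default_rng(1)
DELTA = 0.1                                   # quantization step size

def quantize(v):
    """Uniform quantizer with countably many levels (unbounded range)."""
    return DELTA * np.round(v / DELTA)

N = 10
neighbors = [((i - 1) % N, (i + 1) % N) for i in range(N)]  # ring topology
x = rng.standard_normal(N)                    # initial sensor states
initial_average = x.mean()

for t in range(1, 50001):
    w = 0.5 / t                               # decaying link weight sequence
    dither = rng.uniform(-DELTA / 2, DELTA / 2, size=N)  # dither before quantizing
    q = quantize(x + dither)                  # each sensor broadcasts a quantized state
    x_new = x.copy()
    for i in range(N):
        for j in neighbors[i]:
            if rng.random() > 0.2:            # link alive with probability 0.8
                x_new[i] += w * (q[j] - x[i])
    x = x_new

# The states agree on a finite random value near the initial average; a
# faster-decaying weight sequence shrinks its m.s.e. from the average at the
# cost of slower convergence.
print(np.round(x, 3), "initial average:", round(initial_average, 3))
```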