Performance Limits for Distributed Estimation Over LMS Adaptive Networks


English Abstract

In this work we analyze the mean-square performance of different strategies for distributed estimation over least-mean-squares (LMS) adaptive networks. The results highlight some useful properties of distributed adaptation in comparison to fusion-based centralized solutions. The analysis establishes that, by optimizing over the combination weights, diffusion strategies can deliver lower excess mean-square error than centralized solutions employing traditional block or incremental LMS strategies. We first study in some detail the situation involving combinations of two adaptive agents and then extend the results to generic N-node ad-hoc networks. In the latter case, we establish that, for sufficiently small step-sizes, diffusion strategies can outperform centralized block or incremental LMS strategies by optimizing over left-stochastic combination weighting matrices. The results suggest more efficient ways of organizing and processing data at fusion centers, and present useful adaptive strategies that are able to enhance performance when implemented in a distributed manner.
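To make the diffusion strategy referenced in the abstract concrete, the following is a minimal sketch of an adapt-then-combine (ATC) diffusion LMS recursion over an N-node network with a left-stochastic combination matrix. The function name, array shapes, and data layout are illustrative assumptions and do not follow the paper's notation; the sketch only shows the generic structure of a diffusion LMS update (local adaptation followed by a convex combination of neighbors' intermediate estimates).

```python
import numpy as np

def atc_diffusion_lms(U, d, A, mu):
    """Sketch of adapt-then-combine (ATC) diffusion LMS (assumed setup).

    U  : regressors, shape (T, N, M) -- one M-dim regressor per node per time step
    d  : measurements, shape (T, N)
    A  : left-stochastic combination matrix, shape (N, N); columns sum to one,
         A[l, k] weighs node l's intermediate estimate at node k
    mu : step-size (assumed small, in line with the small step-size analysis)
    """
    T, N, M = U.shape
    w = np.zeros((N, M))            # current estimate at each node
    for t in range(T):
        # Adaptation step: each node runs a local LMS update on its own data
        psi = np.empty((N, M))
        for k in range(N):
            u = U[t, k]
            e = d[t, k] - u @ w[k]
            psi[k] = w[k] + mu * e * u
        # Combination step: each node averages its neighbors' intermediate
        # estimates using the k-th column of the combination matrix A
        for k in range(N):
            w[k] = sum(A[l, k] * psi[l] for l in range(N))
    return w
```

The left-stochastic constraint (each column of A summing to one) is what the abstract refers to when it speaks of optimizing over combination weighting matrices; choosing those weights well is what allows the distributed strategy to match or outperform the centralized block or incremental LMS solutions.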
