
Distributed Local Linear Parameter Estimation using Gaussian SPAWN

Added by Mei Leng
Publication date: 2014
Language: English





We consider the problem of estimating local sensor parameters, where the local parameters and sensor observations are related through linear stochastic models. Sensors exchange messages and cooperate with each other to estimate their own local parameters iteratively. We study the Gaussian Sum-Product Algorithm over a Wireless Network (gSPAWN) procedure, which is based on belief propagation but uses fixed-size broadcast messages at each sensor instead. Compared with the popular diffusion strategies for performing network parameter estimation, whose communication cost at each sensor increases with increasing network density, the gSPAWN algorithm allows sensors to broadcast a message whose size does not depend on the network size or density, making it more suitable for applications in wireless sensor networks. We show that the gSPAWN algorithm converges in mean and has mean-square stability under some technical sufficient conditions, and we describe an application of the gSPAWN algorithm to a network localization problem in non-line-of-sight environments. Numerical results suggest that gSPAWN converges much faster in general than the diffusion method, and has lower communication costs, with comparable root-mean-square errors.
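As a rough illustration of the fixed-size-broadcast idea, the following minimal Python sketch (not the authors' gSPAWN update; it assumes scalar local parameters, noisy pairwise difference measurements, a single anchor node, and ignores the correlation bookkeeping that loopy Gaussian belief propagation entails) has each sensor keep a Gaussian belief (mean, variance), broadcast just that pair, and fuse its neighbors' broadcasts by precision weighting:

import random

# Toy network: each sensor i has an unknown scalar parameter theta_i.
# Sensors observe noisy differences y[i][j] ~ theta_i - theta_j + noise,
# and one "anchor" sensor knows its parameter exactly. Each sensor keeps a
# Gaussian belief (mean, variance) and broadcasts it; the broadcast size is
# fixed regardless of how many neighbours listen.

random.seed(0)

n = 6
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (0, 5), (1, 4)]
theta = [random.uniform(0.0, 10.0) for _ in range(n)]   # ground truth
noise_var = 0.05
anchors = {0: theta[0]}                                  # anchor priors

# Noisy pairwise measurements (both directions).
y = {}
for i, j in edges:
    d = theta[i] - theta[j] + random.gauss(0.0, noise_var ** 0.5)
    y[(i, j)], y[(j, i)] = d, -d

nbrs = {i: [] for i in range(n)}
for i, j in edges:
    nbrs[i].append(j)
    nbrs[j].append(i)

# Beliefs: start vague everywhere except the anchor.
mean = [anchors.get(i, 0.0) for i in range(n)]
var = [1e-6 if i in anchors else 1e6 for i in range(n)]

for it in range(50):
    new_mean, new_var = mean[:], var[:]
    for i in range(n):
        if i in anchors:
            continue
        # Fuse fixed-size broadcasts (mean_j, var_j) from neighbours:
        # each yields the pseudo-measurement theta_i ~ y[i,j] + mean_j
        # with variance noise_var + var_j; combine by precision weighting.
        prec, prec_mean = 0.0, 0.0
        for j in nbrs[i]:
            v = noise_var + var[j]
            prec += 1.0 / v
            prec_mean += (y[(i, j)] + mean[j]) / v
        new_var[i] = 1.0 / prec
        new_mean[i] = prec_mean / prec
    mean, var = new_mean, new_var

print("truth   :", [round(t, 2) for t in theta])
print("estimate:", [round(m, 2) for m in mean])

Note that the broadcast payload is two numbers per sensor per iteration, independent of the number of neighbors, which is the property the abstract contrasts with diffusion strategies.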

Related research

The paper studies distributed static parameter (vector) estimation in sensor networks with nonlinear observation models and noisy inter-sensor communication. It introduces \emph{separably estimable} observation models that generalize the observability condition in linear centralized estimation to nonlinear distributed estimation. It studies two distributed estimation algorithms in separably estimable models, $\mathcal{NU}$ (with its linear counterpart $\mathcal{LU}$) and $\mathcal{NLU}$. Their update rule combines a \emph{consensus} step (where each sensor updates its state by weight-averaging it with its neighbors' states) and an \emph{innovation} step (where each sensor processes its local current observation). This makes the three algorithms of the \textit{consensus + innovations} type, very different from traditional consensus. The paper proves consistency (all sensors reach consensus almost surely and converge to the true parameter value), efficiency, and asymptotic unbiasedness. For $\mathcal{LU}$ and $\mathcal{NU}$, it proves asymptotic normality and provides convergence rate guarantees. The three algorithms are characterized by appropriately chosen decaying weight sequences. Algorithms $\mathcal{LU}$ and $\mathcal{NU}$ are analyzed in the framework of stochastic approximation theory; algorithm $\mathcal{NLU}$ exhibits mixed time-scale behavior and biased perturbations, and its analysis requires a different approach that is developed in the paper.
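A minimal sketch of a consensus + innovations iteration of the $\mathcal{LU}$ type may help fix ideas (illustrative topology, weight sequences, and observation matrices; not the paper's exact algorithm): each sensor mixes a consensus term over its neighbors with a local innovation term, using weights that decay at different rates.

import numpy as np

# Sketch of a linear consensus + innovations update: sensors on a ring
# estimate a common theta* in R^2; each sensor observes only one coordinate
# (jointly observable across the network, not locally observable).

rng = np.random.default_rng(1)

theta_star = np.array([1.0, -2.0])
H = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]   # two sensor types
n = 10
nbrs = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}

x = [np.zeros(2) for _ in range(n)]
for t in range(1, 20001):
    alpha = 1.0 / t            # innovation weight, decays as 1/t
    beta = 0.3 / t ** 0.6      # consensus weight, decays more slowly
    x_new = []
    for i in range(n):
        Hi = H[i % 2]
        y_i = Hi @ theta_star + 0.3 * rng.standard_normal(1)  # noisy obs
        consensus = sum(x[i] - x[j] for j in nbrs[i])
        innovation = Hi.T @ (y_i - Hi @ x[i])
        x_new.append(x[i] - beta * consensus + alpha * innovation)
    x = x_new

print("mean estimate:", np.mean(x, axis=0))  # close to theta_star

The two decay rates reflect the mixed time-scale structure mentioned above: the consensus weight dominates the innovation weight asymptotically, forcing agreement while the innovations steer the common value toward the truth.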
Bayesian analysis is a framework for parameter estimation that applies even in uncertainty regimes where the commonly used local (frequentist) analysis based on the Cramér-Rao bound is not well defined. In particular, it applies when no initial information about the parameter value is available, e.g., when few measurements are performed. Here, we consider three paradigmatic estimation schemes in continuous-variable quantum metrology (estimation of displacements, phases, and squeezing strengths) and analyse them from the Bayesian perspective. For each of these scenarios, we investigate the precision achievable with single-mode Gaussian states under homodyne and heterodyne detection. This allows us to identify Bayesian estimation strategies that combine good performance with the potential for straightforward experimental realization in terms of Gaussian states and measurements. Our results provide practical solutions for reaching uncertainties where local estimation techniques apply, thus bridging the gap to regimes where asymptotically optimal strategies can be employed.
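For a flavor of the Bayesian machinery in the simplest case, here is a sketch with hypothetical numbers (it adopts a common convention of vacuum quadrature variance 1/2 and is not the paper's full analysis): estimating a displacement from homodyne outcomes with a conjugate Gaussian prior yields a well-defined posterior even after a single measurement, where frequentist error bars are ill-defined.

import numpy as np

# Homodyne outcomes on a displaced vacuum state are modelled as Gaussian
# around the true displacement d with quadrature variance 1/2 (a common
# convention). With a Gaussian prior on d, the posterior stays Gaussian
# and is available in closed form after each outcome.

rng = np.random.default_rng(7)

d_true = 0.8
var_meas = 0.5                       # vacuum quadrature variance (convention)
mu, var = 0.0, 4.0                   # broad Gaussian prior on d

for k in range(5):
    outcome = d_true + rng.normal(0.0, var_meas ** 0.5)
    # Conjugate Gaussian update: precisions add, means are precision-weighted.
    post_prec = 1.0 / var + 1.0 / var_meas
    mu = (mu / var + outcome / var_meas) / post_prec
    var = 1.0 / post_prec
    print(f"after {k + 1} outcome(s): mean={mu:.3f}, sd={var ** 0.5:.3f}")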
Olivier Pinel, Pu Jian (2013)
We calculate the quantum Cramér-Rao bound for the sensitivity with which one or several parameters, encoded in a general single-mode Gaussian state, can be estimated. This includes in particular the interesting case of mixed Gaussian states. We apply the formula to the problems of estimating phase, purity, loss, amplitude, and squeezing. In the case of the simultaneous measurement of several parameters, we provide the full quantum Fisher information matrix. Our results unify previously known partial results, and constitute a complete solution to the problem of knowing the best possible sensitivity of measurements based on a single-mode Gaussian state.
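For reference, the single-mode quantum Fisher information is often quoted in the following form (conventions for the covariance matrix and purity vary between papers, so the prefactors should be checked against the source; for pure states the middle term is understood in the limiting sense):

$$ F_Q(\theta) = \frac{\operatorname{Tr}\!\big[(\Sigma^{-1}\Sigma')^2\big]}{2\,(1+P^2)} + \frac{2\,(P')^2}{1-P^4} + \bar{x}'^{\,\mathsf{T}}\,\Sigma^{-1}\,\bar{x}', $$

where $\bar{x}(\theta)$ is the mean quadrature vector, $\Sigma(\theta)$ the covariance matrix, $P$ the purity, and primes denote derivatives with respect to $\theta$; the quantum Cramér-Rao bound then reads $\mathrm{Var}(\hat\theta) \ge 1/(M F_Q)$ for $M$ independent repetitions.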
Folding uncertainty in theoretical models into Bayesian parameter estimation is necessary in order to make reliable inferences. A general means of achieving this is by marginalizing over model uncertainty using a prior distribution constructed using Gaussian process regression (GPR). As an example, we apply this technique to the measurement of chirp mass using (simulated) gravitational-wave signals from binary black holes that could be observed using advanced-era gravitational-wave detectors. Unless properly accounted for, uncertainty in the gravitational-wave templates could be the dominant source of error in studies of these systems. We explain our approach in detail and provide proofs of various features of the method, including the limiting behavior at high signal-to-noise ratio, where systematic model uncertainties dominate over noise errors. We find that the marginalized likelihood constructed via GPR offers a significant improvement in parameter estimation over the standard, uncorrected likelihood, both in our simple one-dimensional study and theoretically in general. We also examine the dependence of the method on the size of the training set used in the GPR, on the form of the covariance function adopted for the GPR, and on changes to the detector noise power spectral density.
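The following one-dimensional sketch shows the mechanics under simplifying assumptions (toy models h_acc and h_app, a squared-exponential covariance, a single data sample; all names are illustrative and this is not the paper's waveform pipeline): the residual between an accurate and an approximate model is interpolated by a GP, and the GP posterior variance broadens the likelihood.

import numpy as np

# An "accurate" model h_acc is known only at a few training parameter
# values; elsewhere we use a cheap model h_app plus a Gaussian-process
# interpolant of the residual h_acc - h_app. Marginalising over the GP
# uncertainty adds the GP posterior variance to the noise variance.

def h_acc(lam):  # "expensive" accurate model (hypothetical)
    return np.sin(lam) + 0.1 * np.sin(5 * lam)

def h_app(lam):  # cheap approximate model (hypothetical)
    return np.sin(lam)

def rbf(a, b, ell=0.5, sig=0.2):  # squared-exponential covariance
    return sig ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

lam_train = np.linspace(0.0, 2.0, 6)          # training set in parameter space
resid_train = h_acc(lam_train) - h_app(lam_train)

noise_var = 0.05 ** 2                          # data (detector) noise variance
lam_true = 1.3
rng = np.random.default_rng(3)
d = h_acc(lam_true) + rng.normal(0.0, noise_var ** 0.5)  # one data sample

K = rbf(lam_train, lam_train) + 1e-10 * np.eye(len(lam_train))
K_inv = np.linalg.inv(K)

def log_like(lam, marginalise=True):
    lam = np.atleast_1d(float(lam))
    k_star = rbf(lam, lam_train)
    gp_mean = (k_star @ K_inv @ resid_train)[0]
    gp_var = max(rbf(lam, lam)[0, 0] - (k_star @ K_inv @ k_star.T)[0, 0], 0.0)
    model = h_app(lam)[0] + gp_mean
    var = noise_var + (gp_var if marginalise else 0.0)
    return -0.5 * (d - model) ** 2 / var - 0.5 * np.log(2 * np.pi * var)

for lam in np.linspace(0.5, 2.0, 7):
    print(f"lam={lam:.2f}  marginalised={log_like(lam):+.2f}  "
          f"uncorrected={log_like(lam, marginalise=False):+.2f}")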
We study the problem of estimating the parameters (i.e., infection rate and recovery rate) governing the spread of epidemics in networks. Such parameters are typically estimated by measuring various characteristics (such as the number of infected and recovered individuals) of the infected populations over time. However, these measurements also incur certain costs, depending on the population being tested and the times at which the tests are administered. We thus formulate the epidemic parameter estimation problem as an optimization problem, where the goal is to either minimize the total cost spent on collecting measurements, or to optimize the parameter estimates while remaining within a measurement budget. We show that these problems are NP-hard to solve in general, and then propose approximation algorithms with performance guarantees. We validate our algorithms using numerical examples.
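As a toy illustration of the estimation side only (made-up numbers, a discrete-time SIR model, and no measurement costs; the paper's contribution is the cost-constrained measurement-selection problem on top of this), the infection and recovery rates can be recovered from per-step counts by one-dimensional least squares.

import numpy as np

# Discrete-time SIR: new_infections(t) ~ beta * S(t) * I(t) / N and
# new_recoveries(t) ~ gamma * I(t), so beta and gamma are the slopes of
# two simple linear regressions on the measured counts.

rng = np.random.default_rng(5)
N, T = 10_000, 60
beta_true, gamma_true = 0.30, 0.10

S, I, R = [N - 10.0], [10.0], [0.0]
for t in range(T):
    new_inf = max(beta_true * S[-1] * I[-1] / N + rng.normal(0, 2.0), 0.0)
    new_rec = min(max(gamma_true * I[-1] + rng.normal(0, 2.0), 0.0), I[-1])
    S.append(S[-1] - new_inf)
    I.append(I[-1] + new_inf - new_rec)
    R.append(R[-1] + new_rec)

S, I, R = map(np.array, (S, I, R))
dS = -np.diff(S)                      # measured new infections per step
dR = np.diff(R)                       # measured new recoveries per step
x_inf = S[:-1] * I[:-1] / N           # regressor for beta
x_rec = I[:-1]                        # regressor for gamma

beta_hat = (x_inf @ dS) / (x_inf @ x_inf)     # 1-D least squares
gamma_hat = (x_rec @ dR) / (x_rec @ x_rec)
print(f"beta:  true={beta_true}, est={beta_hat:.3f}")
print(f"gamma: true={gamma_true}, est={gamma_hat:.3f}")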