
A Bayesian algorithm for distributed network localization using distance and direction data

Published by: Hassan Naseri
Publication date: 2017
Research field: Informatics Engineering
Paper language: English

A reliable, accurate, and affordable positioning service is in high demand in wireless networks. In this paper, the novel Message Passing Hybrid Localization (MPHL) algorithm is proposed to solve the problem of cooperative distributed localization using distance and direction estimates. This hybrid approach combines two sensing modalities to reduce the uncertainty in localizing the network nodes. A statistical model is formulated for the problem, and approximate minimum mean square error (MMSE) estimates of the node locations are computed. The proposed MPHL is a distributed algorithm based on belief propagation (BP) and Markov chain Monte Carlo (MCMC) sampling. It improves the identifiability of the localization problem and reduces its sensitivity to the anchor node geometry compared to distance-only or direction-only localization techniques. For example, the unknown location of a node can be found even if it has only a single neighbor, and a whole network can be localized using only a single anchor node. Numerical results show that, compared to the competing algorithms, the average localization error is significantly reduced in almost every simulation scenario, by about 50% in most cases.
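
As a rough illustration of the hybrid idea (not the paper's MPHL implementation), the sketch below estimates a single node's 2D position from one anchor neighbor by Metropolis sampling over a combined range-and-bearing likelihood; the anchor position, noise levels, and independent Gaussian noise model are assumptions made for the example.

```python
# Minimal sketch, NOT the paper's MPHL: Metropolis sampling of one node's 2D
# position from a single anchor neighbor, combining a range and a bearing
# measurement under an assumed independent Gaussian noise model.
import numpy as np

rng = np.random.default_rng(0)

anchor = np.array([0.0, 0.0])        # known neighbor (anchor) position, assumed
true_pos = np.array([3.0, 4.0])      # unknown node position (ground truth)
sigma_d, sigma_a = 0.1, 0.05         # assumed distance / bearing noise std

# Simulated distance and direction measurements from the anchor to the node
diff_true = true_pos - anchor
d_meas = np.linalg.norm(diff_true) + sigma_d * rng.standard_normal()
a_meas = np.arctan2(diff_true[1], diff_true[0]) + sigma_a * rng.standard_normal()

def log_lik(x):
    """Hybrid log-likelihood: range term plus wrapped bearing term."""
    diff = x - anchor
    d = np.linalg.norm(diff)
    a = np.arctan2(diff[1], diff[0])
    ang_err = np.angle(np.exp(1j * (a_meas - a)))     # wrap to (-pi, pi]
    return -0.5 * ((d_meas - d) / sigma_d) ** 2 - 0.5 * (ang_err / sigma_a) ** 2

# Random-walk Metropolis over the unknown position; the mean of the retained
# samples approximates the MMSE estimate.
x = np.array([1.0, 1.0])
samples = []
for _ in range(5000):
    prop = x + 0.2 * rng.standard_normal(2)
    if np.log(rng.random()) < log_lik(prop) - log_lik(x):
        x = prop
    samples.append(x)

print("approximate MMSE estimate:", np.mean(samples[1000:], axis=0))
```

Because the bearing removes the circular ambiguity left by a range measurement alone, a single neighbor suffices to pin down the position in this toy setup, which mirrors the identifiability claim in the abstract.
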

Read also

We consider the nonparametric sequential hypothesis testing problem where the distribution under the null hypothesis is fully known but the alternative hypothesis corresponds to some other unknown distribution with some loose constraints. We propose a simple algorithm to address the problem. These problems are primarily motivated by wireless sensor networks and spectrum sensing in cognitive radios. A decentralized version utilizing spatial diversity is also proposed. Its performance is analysed and asymptotic properties are proved. The simulated and analysed performance of the algorithm is compared with an earlier algorithm addressing the same problem under similar assumptions. We also modify the algorithm to optimize performance when information about the prior probabilities of occurrence of the two hypotheses is known.
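
For context only, the sketch below shows a generic Wald-style sequential probability ratio test, not the paper's nonparametric algorithm; the Gaussian null, the assumed minimum mean separation under the alternative, and the thresholds are all illustrative assumptions.

```python
# Minimal sketch, NOT the paper's test: a Wald-style sequential probability ratio
# test with a fully known Gaussian null and a conservative surrogate alternative
# shifted by an assumed minimum separation delta.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
delta = 0.5                              # assumed minimum mean shift under H1
A, B = np.log(99.0), np.log(1.0 / 99.0)  # thresholds from target error rates

def sequential_test(stream):
    s = 0.0
    for n, y in enumerate(stream, start=1):
        s += norm.logpdf(y, loc=delta) - norm.logpdf(y, loc=0.0)  # running LLR
        if s >= A:
            return "decide H1", n
        if s <= B:
            return "decide H0", n
    return "undecided", n

print(sequential_test(rng.standard_normal(1000)))          # data from H0
print(sequential_test(delta + rng.standard_normal(1000)))  # data from H1
```
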
In this paper, we disclose the statistical behavior of the max-product algorithm configured to solve a maximum a posteriori (MAP) estimation problem in a network of distributed agents. Specifically, we first build a distributed hypothesis test conducted by a max-product iteration over a binary-valued pairwise Markov random field and show that the decision variables obtained are linear combinations of the local log-likelihood ratios observed in the network. Then, we use these linear combinations to formulate the system performance in terms of the false-alarm and detection probabilities. Our findings indicate that, in the hypothesis test concerned, the optimal performance of the max-product algorithm is obtained by an optimal linear data-fusion scheme and the behavior of the max-product algorithm is very similar to the behavior of the sum-product algorithm. Consequently, we demonstrate that the optimal performance of the max-product iteration is closely achieved via a linear version of the sum-product algorithm which is optimized based on statistics received at each node from its one-hop neighbors. Finally, we verify our observations via computer simulations.
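
The linear-fusion view described above can be illustrated with a small Monte Carlo sketch: decision variables are formed as weighted sums of local log-likelihood ratios and thresholded. The Gaussian observation model, uniform weights, and network size below are assumptions for the example, not the paper's setup.

```python
# Minimal sketch, NOT the paper's analysis: a fused decision variable built as a
# linear combination of local log-likelihood ratios, with false-alarm and
# detection probabilities estimated by Monte Carlo under an assumed Gaussian model.
import numpy as np

rng = np.random.default_rng(2)
n_nodes, n_trials = 3, 20000
mu = 1.0                       # assumed signal mean under H1 (unit-variance noise)
w = np.ones(n_nodes)           # fusion weights (uniform here; an optimized linear
                               # fusion would choose them from the node statistics)

def llr(y):
    # log-likelihood ratio of N(mu, 1) vs N(0, 1) observations
    return mu * y - 0.5 * mu ** 2

y0 = rng.standard_normal((n_trials, n_nodes))          # observations under H0
y1 = mu + rng.standard_normal((n_trials, n_nodes))     # observations under H1
t0 = (w * llr(y0)).sum(axis=1)                         # fused decision variables
t1 = (w * llr(y1)).sum(axis=1)

tau = 0.0                                              # decision threshold
print("P_FA ~", np.mean(t0 > tau), " P_D ~", np.mean(t1 > tau))
```
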
This paper introduces a framework for regression with dimensionally distributed data with a fusion center. A cooperative learning algorithm, the iterative conditional expectation algorithm (ICEA), is designed within this framework. The algorithm can effectively discover linear combinations of individual estimators trained by each agent without transferring and storing large amounts of data amongst the agents and the fusion center. The convergence of ICEA is explored. Specifically, for a two-agent system, each complete round of ICEA is guaranteed to be a non-expansive map on the function space of each agent. The advantages and limitations of ICEA are also discussed for data sets with various distributions and various hidden rules. Moreover, several techniques are also designed to leverage the algorithm to effectively learn more complex hidden rules that are not linearly decomposable.
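
A minimal backfitting-style sketch of the idea behind ICEA is given below, assuming two agents holding disjoint feature blocks and linear least-squares estimators; the data model and round count are illustrative assumptions, not the paper's ICEA specification.

```python
# Minimal backfitting-style sketch, NOT the paper's ICEA: two agents hold disjoint
# feature blocks, each refits its own linear model to the residual left by the
# other, and the fusion center sums the two predictions.
import numpy as np

rng = np.random.default_rng(3)
n = 500
x1 = rng.standard_normal((n, 2))            # features observed by agent 1
x2 = rng.standard_normal((n, 3))            # features observed by agent 2
y = x1 @ np.array([1.0, -2.0]) + x2 @ np.array([0.5, 0.0, 3.0]) \
    + 0.1 * rng.standard_normal(n)          # assumed linearly decomposable rule

def fit(X, target):
    # least-squares fit of target on X, returning the coefficient vector
    return np.linalg.lstsq(X, target, rcond=None)[0]

f1 = np.zeros(n)
f2 = np.zeros(n)
for _ in range(10):                          # a few rounds suffice for linear models
    f1 = x1 @ fit(x1, y - f2)                # agent 1 fits the residual of agent 2
    f2 = x2 @ fit(x2, y - f1)                # agent 2 fits the residual of agent 1

print("fusion-center residual RMSE:", np.sqrt(np.mean((y - f1 - f2) ** 2)))
```
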
In this paper, we investigate distributed inference schemes, over binary-valued Markov random fields, which are realized by the belief propagation (BP) algorithm. We first show that a decision variable obtained by the BP algorithm in a network of distributed agents can be approximated by a linear fusion of all the local log-likelihood ratios. The proposed approach clarifies how the BP algorithm works, simplifies the statistical analysis of its behavior, and enables us to develop a performance optimization framework for the BP-based distributed inference systems. Next, we propose a blind learning-adaptation scheme to optimize the system performance when there is no information available a priori describing the statistical behavior of the wireless environment concerned. In addition, we propose a blind threshold adaptation method to guarantee a certain performance level in a BP-based distributed detection system. To clarify the points discussed, we design a novel linear-BP-based distributed spectrum sensing scheme for cognitive radio networks and illustrate the performance improvement obtained, over an existing BP-based detection method, via computer simulations.
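
As a loose illustration of threshold adaptation for a target false-alarm level (not the paper's blind method), the sketch below sets the decision threshold from an empirical quantile of fused statistics collected when the channel is believed idle; the Gaussian statistics and the target rate are assumptions for the example.

```python
# Minimal sketch, NOT the paper's scheme: pick the decision threshold as the
# (1 - target_pfa) empirical quantile of fused statistics gathered under H0,
# so the resulting detector approximately holds the target false-alarm rate.
import numpy as np

rng = np.random.default_rng(4)
target_pfa = 0.05
idle_stats = rng.standard_normal(5000)           # fused statistics observed under H0
tau = np.quantile(idle_stats, 1.0 - target_pfa)  # adapted threshold

new_stat = 1.8                                   # fused statistic for a new sensing slot
print("threshold:", tau, "->", "decide H1" if new_stat > tau else "decide H0")
```
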
We study the secrecy of a distributed storage system for passwords. The encoder, Alice, observes a length-n password and describes it using two hints, which she then stores in different locations. The legitimate receiver, Bob, observes both hints. The eavesdropper, Eve, sees only one of the hints; Alice cannot control which. We characterize the largest normalized (by n) exponent that we can guarantee for the number of guesses it takes Eve to guess the password subject to the constraint that either the number of guesses it takes Bob to guess the password or the size of the list that Bob must form to guarantee that it contain the password approach 1 as n tends to infinity.