
The Max-Product Algorithm Viewed as Linear Data-Fusion: A Distributed Detection Scenario

Added by Younes Abdi
Publication date: 2019
Language: English





In this paper, we disclose the statistical behavior of the max-product algorithm configured to solve a maximum a posteriori (MAP) estimation problem in a network of distributed agents. Specifically, we first build a distributed hypothesis test conducted by a max-product iteration over a binary-valued pairwise Markov random field and show that the decision variables obtained are linear combinations of the local log-likelihood ratios observed in the network. Then, we use these linear combinations to formulate the system performance in terms of the false-alarm and detection probabilities. Our findings indicate that, in the hypothesis test concerned, the optimal performance of the max-product algorithm is obtained by an optimal linear data-fusion scheme and the behavior of the max-product algorithm is very similar to the behavior of the sum-product algorithm. Consequently, we demonstrate that the optimal performance of the max-product iteration is closely achieved via a linear version of the sum-product algorithm which is optimized based on statistics received at each node from its one-hop neighbors. Finally, we verify our observations via computer simulations.
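As a rough illustration of the mechanism described above, the following Python sketch runs max-product message passing (in the log domain) over a small binary pairwise Markov random field and prints each node's decision variable, i.e., its log-belief difference built from the local LLR plus the incoming messages. The four-node line topology, the Ising-style coupling strength, and the randomly drawn LLRs are illustrative assumptions, not parameters from the paper.

```python
# A minimal sketch (not the authors' code) of max-product message passing
# over a small binary pairwise MRF, illustrating how each node's decision
# variable is built from its local LLR and the incoming messages.
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 4-node line network: 0 - 1 - 2 - 3
edges = [(0, 1), (1, 2), (2, 3)]
n_nodes = 4
neighbors = {i: [] for i in range(n_nodes)}
for i, j in edges:
    neighbors[i].append(j)
    neighbors[j].append(i)

llr = rng.normal(0.5, 1.0, size=n_nodes)   # local log-likelihood ratios (assumed)
coupling = 0.4                              # pairwise log-potential strength (assumed)

def pairwise(xi, xj):
    # Ising-style pairwise log-potential favoring agreement between neighbors
    return coupling if xi == xj else -coupling

# Log-domain messages m[(i, j)][x_j] for x_j in {0, 1}
msgs = {(i, j): np.zeros(2) for i in range(n_nodes) for j in neighbors[i]}

for _ in range(20):  # synchronous message updates until convergence
    new_msgs = {}
    for (i, j) in msgs:
        vals = np.zeros(2)
        for xj in (0, 1):
            candidates = []
            for xi in (0, 1):
                local = llr[i] if xi == 1 else 0.0
                incoming = sum(msgs[(k, i)][xi] for k in neighbors[i] if k != j)
                candidates.append(local + pairwise(xi, xj) + incoming)
            vals[xj] = max(candidates)
        new_msgs[(i, j)] = vals - vals.max()  # normalize for numerical stability
    msgs = new_msgs

# Max-product decision variable at each node: log-belief difference
for i in range(n_nodes):
    delta = llr[i] + sum(msgs[(j, i)][1] - msgs[(j, i)][0] for j in neighbors[i])
    print(f"node {i}: decision variable = {delta:+.3f}")
```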



Related research

In this paper, we investigate distributed inference schemes, over binary-valued Markov random fields, which are realized by the belief propagation (BP) algorithm. We first show that a decision variable obtained by the BP algorithm in a network of distributed agents can be approximated by a linear fusion of all the local log-likelihood ratios. The proposed approach clarifies how the BP algorithm works, simplifies the statistical analysis of its behavior, and enables us to develop a performance optimization framework for the BP-based distributed inference systems. Next, we propose a blind learning-adaptation scheme to optimize the system performance when there is no information available a priori describing the statistical behavior of the wireless environment concerned. In addition, we propose a blind threshold adaptation method to guarantee a certain performance level in a BP-based distributed detection system. To clarify the points discussed, we design a novel linear-BP-based distributed spectrum sensing scheme for cognitive radio networks and illustrate the performance improvement obtained, over an existing BP-based detection method, via computer simulations.
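The linear-fusion view in the abstract above can be made concrete with a small numerical sketch: if each node's LLR is approximated as Gaussian under either hypothesis and the fused statistic is a weighted sum of independent LLRs, the false-alarm and detection probabilities follow in closed form. The per-node means, variances, fusion weights, and threshold below are assumed purely for illustration and are not taken from the paper.

```python
# A minimal sketch of the linear data-fusion view: the decision variable is
# approximated as a weighted sum of local LLRs and compared to a threshold.
import numpy as np
from scipy.stats import norm

# Assumed per-node LLR statistics under H0 and H1 (Gaussian approximation)
mu0 = np.array([-0.5, -0.4, -0.6])
mu1 = np.array([0.5, 0.4, 0.6])
sigma = np.array([1.0, 1.2, 0.9])

w = np.array([0.5, 0.3, 0.2])   # illustrative fusion weights
tau = 0.0                        # detection threshold

# Fused statistic T = w^T * llr is Gaussian under each hypothesis
m0, m1 = w @ mu0, w @ mu1
s = np.sqrt(np.sum((w * sigma) ** 2))   # assumes independent LLRs

p_fa = norm.sf(tau, loc=m0, scale=s)    # false-alarm probability
p_d = norm.sf(tau, loc=m1, scale=s)     # detection probability
print(f"P_FA = {p_fa:.3f}, P_D = {p_d:.3f}")
```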
We consider the nonparametric sequential hypothesis testing problem in which the distribution under the null hypothesis is fully known, while the alternative hypothesis corresponds to some other unknown distribution subject only to loose constraints. We propose a simple algorithm to address the problem. These problems are primarily motivated by wireless sensor networks and spectrum sensing in cognitive radios. A decentralized version exploiting spatial diversity is also proposed; its performance is analysed and asymptotic properties are proved. The simulated and analysed performance of the algorithm is compared with that of an earlier algorithm addressing the same problem under similar assumptions. We also modify the algorithm to optimize performance when the prior probabilities of the two hypotheses are known.
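The following is a generic truncated sequential-test sketch in the spirit of the setting just described (known null distribution, loosely constrained alternative); it is not the authors' algorithm, and the drift parameter and stopping thresholds are illustrative assumptions.

```python
# A minimal sketch of a generic sequential test: H0 is a known standard
# normal, while H1 is only loosely constrained (mean at least `delta`).
import numpy as np

rng = np.random.default_rng(1)
delta = 0.5                 # assumed minimum mean shift under H1
upper, lower = 8.0, -8.0    # illustrative stopping thresholds

def sequential_test(stream):
    """Accumulate a drift-corrected sum and stop at the first threshold crossing."""
    s = 0.0
    for n, x in enumerate(stream, start=1):
        s += x - delta / 2.0          # positive drift under H1, negative under H0
        if s >= upper:
            return "H1", n
        if s <= lower:
            return "H0", n
    return "undecided", n

# Example: data actually drawn under H1 with mean shift delta
samples = rng.normal(delta, 1.0, size=1000)
decision, n_used = sequential_test(samples)
print(decision, "after", n_used, "samples")
```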
A deep learning assisted sum-product detection algorithm (DL-SPDA) for faster-than-Nyquist (FTN) signaling is proposed in this paper. The proposed detection algorithm works on a modified factor graph that concatenates a neural network function node to the variable nodes of the conventional FTN factor graph in order to approach the maximum a posteriori (MAP) error performance. Specifically, the neural network acts as a function node in the modified factor graph to deal with the residual intersymbol interference (ISI) that a conventional, complexity-limited detector does not consider. We modify the updating rule of the conventional sum-product algorithm so that the neural-network-assisted detector can be combined with a Turbo equalization receiver. Furthermore, we propose a compatible training technique to improve the detection performance of the proposed DL-SPDA with Turbo equalization. In particular, the neural network is optimized in terms of the mutual information between the transmitted sequence and the extrinsic information. We also investigate the maximum-likelihood bit error rate (BER) performance of a finite-length coded FTN system. Simulation results show that the error performance of the proposed algorithm approaches the MAP performance, which is consistent with the analytical BER.
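A highly simplified view of the modified variable-node update described above is sketched below: in the log domain, the variable node adds one extra incoming message, contributed by a neural-network function node, to the usual channel and ISI-factor messages. The numerical values and the stand-in nn_message function are placeholders, not the trained DL-SPDA network.

```python
# A minimal sketch of a variable-node update with an extra NN function node.
import numpy as np

def nn_message(observations):
    # Placeholder for the neural-network function node's output LLRs
    # (in DL-SPDA this role is played by a trained network handling residual ISI).
    return 0.1 * np.tanh(observations)

channel_llrs = np.array([1.2, -0.7, 0.3, -1.5])      # assumed channel LLRs
isi_factor_msgs = np.array([0.2, -0.1, 0.05, -0.3])  # assumed ISI-factor messages
obs = np.array([0.9, -0.5, 0.2, -1.1])               # assumed received samples

# Modified update rule: conventional messages plus the NN function node's message
variable_beliefs = channel_llrs + isi_factor_msgs + nn_message(obs)
hard_decisions = (variable_beliefs < 0).astype(int)  # BPSK-style mapping (assumed)
print(variable_beliefs, hard_decisions)
```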
A reliable, accurate, and affordable positioning service is highly desirable in wireless networks. In this paper, the novel Message Passing Hybrid Localization (MPHL) algorithm is proposed to solve the problem of cooperative distributed localization using distance and direction estimates. This hybrid approach combines two sensing modalities to reduce the uncertainty in localizing the network nodes. A statistical model is formulated for the problem, and approximate minimum mean square error (MMSE) estimates of the node locations are computed. The proposed MPHL is a distributed algorithm based on belief propagation (BP) and Markov chain Monte Carlo (MCMC) sampling. It improves the identifiability of the localization problem and reduces its sensitivity to the anchor node geometry, compared to distance-only or direction-only localization techniques. For example, the unknown location of a node can be found even if it has only a single neighbor, and a whole network can be localized using only a single anchor node. Numerical results show that the average localization error is significantly reduced in almost every simulation scenario, by about 50% in most cases, compared to the competing algorithms.
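As a toy illustration of why combining the two modalities improves identifiability, the sketch below resolves a node's position from a single anchor using one noisy range and one noisy bearing, something distance-only measurements cannot do with a single neighbor. This is not the MPHL message-passing algorithm; the geometry and noise levels are assumed.

```python
# A toy illustration of hybrid range-plus-bearing localization from one anchor.
import numpy as np

rng = np.random.default_rng(3)
anchor = np.array([0.0, 0.0])
true_pos = np.array([3.0, 4.0])

# Noisy range and bearing measurements from the unknown node to the anchor
d = np.linalg.norm(true_pos - anchor) + rng.normal(0, 0.1)
theta = np.arctan2(*(true_pos - anchor)[::-1]) + rng.normal(0, 0.02)

# With both modalities, the position follows directly from one neighbor
estimate = anchor + d * np.array([np.cos(theta), np.sin(theta)])
print("true:", true_pos, "estimate:", np.round(estimate, 2))
```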
This paper introduces a framework for regression with dimensionally distributed data and a fusion center. A cooperative learning algorithm, the iterative conditional expectation algorithm (ICEA), is designed within this framework. The algorithm can effectively discover linear combinations of individual estimators trained by each agent without transferring and storing large amounts of data among the agents and the fusion center. The convergence of ICEA is explored. Specifically, for a two-agent system, each complete round of ICEA is guaranteed to be a non-expansive map on the function space of each agent. The advantages and limitations of ICEA are also discussed for data sets with various distributions and various hidden rules. Moreover, several techniques are designed to leverage the algorithm to effectively learn more complex hidden rules that are not linearly decomposable.
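The cooperative-fitting idea behind such a scheme can be illustrated with a backfitting-style sketch in which two agents, each holding a different feature, alternately refit their own estimator to the residual left by the other, and the fusion is the sum of the two estimators. This is only a conceptual illustration under assumed data and linear models, not the authors' ICEA implementation.

```python
# A backfitting-style sketch of cooperative regression with dimensionally
# distributed data: each agent sees only its own feature column.
import numpy as np

rng = np.random.default_rng(2)
n = 200
x1 = rng.normal(size=(n, 1))   # features held by agent 1
x2 = rng.normal(size=(n, 1))   # features held by agent 2
y = 2.0 * x1[:, 0] - 1.5 * x2[:, 0] + 0.1 * rng.normal(size=n)

def fit_linear(x, target):
    """Least-squares fit with intercept; returns a prediction function."""
    A = np.hstack([x, np.ones((len(x), 1))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return lambda xq: np.hstack([xq, np.ones((len(xq), 1))]) @ coef

f1 = lambda xq: np.zeros(len(xq))   # agent 1's current estimator
f2 = lambda xq: np.zeros(len(xq))   # agent 2's current estimator

for _ in range(10):                  # alternate conditional refits
    f1 = fit_linear(x1, y - f2(x2))  # agent 1 fits the residual left by agent 2
    f2 = fit_linear(x2, y - f1(x1))  # agent 2 fits the residual left by agent 1

pred = f1(x1) + f2(x2)               # fused prediction
print("RMSE:", np.sqrt(np.mean((y - pred) ** 2)))
```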