We study the behavior of the belief-propagation (BP) algorithm under erroneous data exchange in a wireless sensor network (WSN). The WSN conducts a distributed binary hypothesis test in which the joint statistical behavior of the sensor observations is modeled by a Markov random field whose parameters are used to build the BP messages exchanged between the sensing nodes. Through linearization of the BP message-update rule, we analyze the behavior of the resulting erroneous decision variables and derive closed-form relationships that describe the impact of stochastic errors on the performance of the BP algorithm. We then develop a decentralized optimization framework to enhance the system performance by mitigating the impact of errors via a distributed linear data-fusion scheme. Finally, we compare the results of the proposed analysis with those of existing works and illustrate, via computer simulations, the performance gain obtained by the proposed optimization.
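As a rough illustration of this setup (not the authors' code), the following sketch runs log-domain BP on a toy ring-shaped Ising Markov random field and injects an additive Gaussian error into every exchanged message; the topology, coupling `J`, and error variance `sigma_err` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: N sensors on a ring, binary states in {-1, +1},
# uniform pairwise coupling J, and local LLRs drawn as if H1 were true.
N, J, iters, sigma_err = 10, 0.3, 20, 0.1
llr = rng.normal(loc=1.0, scale=1.0, size=N)   # local log-likelihood ratios

# Log-domain BP messages m[(i, j)] from node i to neighbor j (ring topology).
nbrs = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}
m = {(i, j): 0.0 for i in range(N) for j in nbrs[i]}

for _ in range(iters):
    new_m = {}
    for (i, j) in m:
        # Incoming messages to i, excluding the one coming from j.
        u = llr[i] + sum(m[(k, i)] for k in nbrs[i] if k != j)
        # Exact pairwise (Ising) BP update, then an additive stochastic error
        # modeling imperfect message exchange over the wireless link.
        msg = 2.0 * np.arctanh(np.tanh(J) * np.tanh(u / 2.0))
        new_m[(i, j)] = msg + rng.normal(0.0, sigma_err)
    m = new_m

# Decision variable at each node: local LLR plus all incoming messages.
decision = np.array([llr[i] + sum(m[(k, i)] for k in nbrs[i]) for i in range(N)])
print("decide H1 at node:", decision > 0)
```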
In this paper, we investigate distributed inference schemes over binary-valued Markov random fields, realized by the belief propagation (BP) algorithm. We first show that a decision variable obtained by the BP algorithm in a network of distributed agents can be approximated by a linear fusion of all the local log-likelihood ratios. The proposed approach clarifies how the BP algorithm works, simplifies the statistical analysis of its behavior, and enables us to develop a performance optimization framework for BP-based distributed inference systems. Next, we propose a blind learning-adaptation scheme that optimizes the system performance when no prior information describing the statistical behavior of the wireless environment is available. In addition, we propose a blind threshold adaptation method that guarantees a certain performance level in a BP-based distributed detection system. To illustrate these ideas, we design a novel linear-BP-based distributed spectrum sensing scheme for cognitive radio networks and demonstrate, via computer simulations, the performance improvement it achieves over an existing BP-based detection method.
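The linear-fusion view can be shown in a few lines: each node's decision variable is approximated as a weighted sum of all local LLRs. The weight matrix `W` below is a placeholder assumption; in the paper it would follow from the linearization of the BP updates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal sketch of the linear-fusion approximation of BP decision variables.
N = 5
llr = rng.normal(1.0, 1.0, size=N)                   # local log-likelihood ratios
W = np.eye(N) + 0.2 * (np.ones((N, N)) - np.eye(N))  # hypothetical fusion weights

decision = W @ llr          # linear data fusion of all local LLRs
threshold = 0.0
print("per-node decisions:", decision > threshold)
```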
We introduce a two-stage decimation process to improve the performance of neural belief propagation (NBP), recently introduced by Nachmani et al., for short low-density parity-check (LDPC) codes. In the first stage, we build a list by iterating between a conventional NBP decoder and guessing the least reliable bit. The second stage iterates between a conventional NBP decoder and learned decimation, where we use a neural network to decide the decimation value for each bit. For a (128,64) LDPC code, the proposed NBP with decimation outperforms NBP decoding by 0.75 dB and performs within 1 dB of maximum-likelihood decoding at a block error rate of $10^{-4}$.
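A control-flow sketch of the first (guessing) stage is given below. The trained NBP decoder is replaced by a trivial placeholder; `nbp_decode`, `H`, and all parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Toy parity-check matrix (placeholder).
H = np.array([[1, 1, 0, 1], [0, 1, 1, 1]])

def nbp_decode(llr, frozen):
    # Stand-in for a trained neural BP decoder: passes LLRs through and
    # pins ("decimates") any bit with a nonzero entry in `frozen`.
    out = llr.copy()
    out[frozen != 0] = frozen[frozen != 0] * 1e3
    return out

def syndrome_ok(llr):
    bits = (llr < 0).astype(int)
    return not np.any((H @ bits) % 2)

def decode_with_guessing(llr, max_guesses=2):
    """Stage 1: alternate NBP decoding with guessing the least reliable bit
    both ways, collecting a candidate list."""
    candidates, frozen = [], np.zeros_like(llr)
    for _ in range(max_guesses):
        out = nbp_decode(llr, frozen)
        if syndrome_ok(out):
            candidates.append(out)
        i = int(np.argmin(np.abs(out)))      # least reliable position
        for sign in (+1.0, -1.0):            # guess both decimation values
            trial = frozen.copy()
            trial[i] = sign
            candidates.append(nbp_decode(llr, trial))
        frozen[i] = np.sign(out[i]) or 1.0   # decimate and continue
    return candidates

cands = decode_with_guessing(np.array([0.2, -0.1, 1.5, -2.0]))
print(len(cands), "candidates")
```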
We consider near maximum-likelihood (ML) decoding of short linear block codes. In particular, we propose a novel decoding approach based on neural belief propagation (NBP) decoding, recently introduced by Nachmani et al., in which we allow a different parity-check matrix in each iteration of the algorithm. The key idea is to consider NBP decoding over an overcomplete parity-check matrix and use the weights of NBP as a measure of the importance of the check nodes (CNs) to decoding. The unimportant CNs are then pruned. In contrast to NBP, which performs decoding on a given fixed parity-check matrix, the proposed pruning-based neural belief propagation (PB-NBP) typically results in a different parity-check matrix in each iteration. For a given complexity in terms of CN evaluations, we show that PB-NBP yields significant performance improvements with respect to NBP. We apply the proposed decoder to the decoding of a Reed-Muller code, a short low-density parity-check (LDPC) code, and a polar code. PB-NBP outperforms NBP decoding over an overcomplete parity-check matrix by 0.27-0.31 dB while reducing the number of required CN evaluations by up to 97%. For the LDPC code, PB-NBP outperforms conventional belief propagation with the same number of CN evaluations by 0.52 dB. We further extend the pruning concept to offset min-sum decoding and introduce a pruning-based neural offset min-sum (PB-NOMS) decoder, for which we jointly optimize the offsets and the quantization of the messages and offsets. We demonstrate performance within 0.5 dB of ML decoding with 5-bit quantization for the Reed-Muller code.
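The pruning step itself reduces to ranking check nodes by a learned importance weight and discarding the least important ones. The sketch below uses random stand-in weights (an assumption) purely to show the mechanics; in PB-NBP the ranking would come from the trained NBP weights.

```python
import numpy as np

rng = np.random.default_rng(2)

H_overcomplete = rng.integers(0, 2, size=(12, 16))  # toy overcomplete parity-check matrix
cn_weight = rng.random(12)                          # learned CN importance (placeholder)

keep = np.argsort(cn_weight)[-8:]                   # prune the 4 least important CNs
H_pruned = H_overcomplete[np.sort(keep)]
print(H_pruned.shape)  # (8, 16); a different H can be selected in each iteration
```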
A low-density parity-check (LDPC) code is a linear block code described by a sparse parity-check matrix, which can be efficiently represented by a bipartite Tanner graph. The standard iterative decoding algorithm, known as belief propagation, passes messages along the edges of this Tanner graph. Density evolution is an efficient method to analyze the performance of the belief propagation decoding algorithm for a particular LDPC code ensemble, enabling the determination of a decoding threshold. The basic problem addressed in this work is how to optimize the Tanner graph so that the decoding threshold is as large as possible. We introduce a new code optimization technique that constrains the search-space range, which can be thought of as reducing the randomness in differential evolution or limiting the range of an exhaustive search. This technique is applied to the design of good irregular LDPC codes and multiedge-type LDPC codes.
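For context, density evolution admits a particularly simple closed form for a regular (d_v, d_c) ensemble on the binary erasure channel. The standard recursion below computes the decoding threshold by bisection; it is a textbook instance of the threshold computation, not the paper's irregular/multiedge optimization.

```python
# Density evolution for a regular (dv, dc) LDPC ensemble on the BEC:
# the erasure probability evolves as x <- eps * (1 - (1 - x)^(dc-1))^(dv-1).

def converges(eps, dv, dc, iters=2000, tol=1e-12):
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
        if x < tol:
            return True
    return False

def threshold(dv, dc):
    lo, hi = 0.0, 1.0
    for _ in range(40):  # bisection on the channel erasure rate
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if converges(mid, dv, dc) else (lo, mid)
    return lo

print(f"(3,6) BEC threshold ~ {threshold(3, 6):.4f}")  # about 0.4294
```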
We consider the problem of identifying a pattern of faults from a set of noisy linear measurements. Unfortunately, maximum a posteriori probability estimation of the fault pattern is computationally intractable. To solve the fault identification problem, we propose a non-parametric belief propagation approach. We show empirically that our belief propagation solver is more accurate than recent state-of-the-art algorithms, including interior-point methods and semidefinite programming. This superior performance is explained by the fact that we take into account both the binary nature of the individual faults and the sparsity of the fault pattern arising from their rarity.
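A toy version of the problem makes the intractability concrete: with noisy linear measurements y = Af + n of a sparse binary fault vector f, brute-force MAP must enumerate all 2^n patterns, which is exactly what motivates a BP-based solver. Every quantity below (dimensions, fault prior, noise level) is an illustrative assumption.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
n, m, p_fault, sigma = 8, 6, 0.1, 0.3

A = rng.normal(size=(m, n))
f_true = (rng.random(n) < p_fault).astype(float)  # sparse binary fault pattern
y = A @ f_true + sigma * rng.normal(size=m)       # noisy linear measurements

# Brute-force MAP: score all 2^n candidate patterns (feasible only for tiny n).
best, best_score = None, -np.inf
for bits in product([0.0, 1.0], repeat=n):
    f = np.array(bits)
    log_prior = np.sum(f * np.log(p_fault) + (1 - f) * np.log(1 - p_fault))
    log_lik = -np.sum((y - A @ f) ** 2) / (2 * sigma ** 2)
    if log_lik + log_prior > best_score:
        best, best_score = f, log_lik + log_prior

print("true:", f_true.astype(int), "MAP:", best.astype(int))
```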