We consider the problem of minimizing the number of broadcasts for collecting all sensor measurements at a sink node in a noisy broadcast sensor network. Focusing first on arbitrary network topologies, we provide (i) fundamental limits on the number of broadcasts required for data gathering, and (ii) a general in-network computing strategy that achieves an upper bound within a factor of $\log N$ of the fundamental limits, where $N$ is the number of agents in the network. Next, focusing on two example networks, namely arbitrary geometric networks and random Erdős-Rényi networks, we provide improved in-network computing schemes that are optimal in that they attain the fundamental limits, i.e., the lower and upper bounds are tight in an order sense. Our main techniques are three distributed encoding techniques, called graph codes, designed respectively for the three scenarios above. Our work thus extends and unifies previous works, such as those of Gallager [1] and Karamchandani et al. [2], on the number of broadcasts for distributed function computation in special network topologies, while bringing in novel techniques, e.g., from error-control coding and noisy circuits, for both upper and lower bounds.
Polar codes are introduced for discrete memoryless broadcast channels. For $m$-user deterministic broadcast channels, polarization is applied to map uniformly random message bits from $m$ independent messages to one codeword while satisfying broadcast constraints. The polarization-based codes achieve rates on the boundary of the private-message capacity region. For two-user noisy broadcast channels, polar implementations are presented for two information-theoretic schemes: (i) Cover's superposition codes; (ii) Marton's codes. Due to the structure of polarization, constraints on the auxiliary and channel-input distributions are identified to ensure proper alignment of polarization indices in the multi-user setting. The codes achieve rates on the capacity boundary of several classes of broadcast channels (e.g., binary-input stochastically degraded channels). The complexity of encoding and decoding is $O(n \log n)$, where $n$ is the block length. In addition, polar code sequences achieve a stretched-exponential decay $O(2^{-n^{\beta}})$ of the average block error probability, where $0 < \beta < 0.5$.
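As a point of reference for the $O(n \log n)$ complexity claim, the following is a minimal sketch of the single-user binary polar transform $x = u F^{\otimes m}$ over GF(2) with the standard kernel $F = \begin{psmallmatrix}1 & 0\\ 1 & 1\end{psmallmatrix}$, computed via its butterfly structure. This is only the basic building block, not the multi-user broadcast construction of the abstract; the example input is arbitrary.

```python
# Minimal sketch: binary polar transform x = u F^{tensor m} over GF(2),
# computed in O(n log n) XORs via log2(n) butterfly stages.
# Single-user building block only, not the broadcast-channel construction.

def polar_transform(u):
    """Apply the n x n polar transform (n must be a power of 2) in place."""
    x = list(u)
    n = len(x)
    half = 1
    while half < n:
        for start in range(0, n, 2 * half):
            for i in range(start, start + half):
                x[i] ^= x[i + half]  # butterfly: (a, b) -> (a XOR b, b)
        half *= 2
    return x

# Example: map an 8-bit block to a codeword.
u = [1, 0, 1, 1, 0, 0, 1, 0]
print(polar_transform(u))
```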
Random linear network codes can be designed and implemented in a distributed manner with low computational complexity. However, these codes are classically implemented over finite fields whose size depends on global network parameters (the size of the network, the number of sinks) that may not be known prior to code design. Also, if new nodes join, the entire network code may have to be redesigned. In this work, we present the first universal and robust distributed linear network coding schemes. Our schemes are universal since they are independent of all network parameters. They are robust since, if nodes join or leave, the remaining nodes do not need to change their coding operations and the receivers can still decode. They are distributed since nodes need only topological information about the part of the network upstream of them, which can be naturally streamed as part of the communication protocol. We present both probabilistic and deterministic schemes that are all asymptotically rate-optimal in the coding block length and have guarantees of correctness. Our probabilistic designs are computationally efficient, with order-optimal complexity. Our deterministic designs guarantee zero-error decoding, albeit via codes with high computational complexity in general. Our coding schemes are based on network codes over ``scalable'' fields. Instead of choosing coding coefficients from one field at every node, each node uses linear coding operations over an ``effective'' field size that depends on the node's distance from the source node. The analysis of our schemes requires technical tools that may be of independent interest. In particular, we generalize the Schwartz-Zippel lemma by proving a non-uniform version, wherein variables are chosen from sets of possibly different sizes. We also provide a novel robust distributed algorithm to assign unique IDs to network nodes.
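For context, here is a toy sketch of the classical random linear network coding baseline that the abstract improves upon: relays forward random linear combinations of their inputs, coding coefficients travel in packet headers, and the sink decodes by Gaussian elimination. The prime field GF(257) is an assumption made for arithmetic simplicity (deployed systems typically use GF($2^8$)); the paper's scalable-field construction is not reproduced here.

```python
# Toy sketch: classical random linear network coding over GF(257)
# (prime field chosen for simple modular arithmetic; this is NOT the
# paper's "scalable field" scheme, where the effective field size grows
# with a node's distance from the source).
import random

P = 257  # field size (prime)

def random_combination(packets, coeffs):
    """One relay operation: emit a random linear combination of the input
    packets, carrying the combined coding coefficients in the header."""
    w = [random.randrange(P) for _ in packets]
    out_pkt = [sum(wi * p[i] for wi, p in zip(w, packets)) % P
               for i in range(len(packets[0]))]
    out_coef = [sum(wi * c[i] for wi, c in zip(w, coeffs)) % P
                for i in range(len(coeffs[0]))]
    return out_pkt, out_coef

def decode(received, coeffs):
    """Sink: Gauss-Jordan elimination on [coeffs | packets] over GF(P)."""
    k = len(received)
    aug = [coeffs[r] + received[r] for r in range(k)]
    for col in range(k):
        piv = next(r for r in range(col, k) if aug[r][col])  # fails only if
        aug[col], aug[piv] = aug[piv], aug[col]               # combos dependent
        inv = pow(aug[col][col], P - 2, P)                    # modular inverse
        aug[col] = [v * inv % P for v in aug[col]]
        for r in range(k):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(a - f * b) % P for a, b in zip(aug[r], aug[col])]
    return [row[k:] for row in aug]

# Two source packets with identity coefficient headers; the sink collects
# two independent random combinations (independent w.h.p.) and decodes.
src = [[10, 20, 30], [40, 50, 60]]
hdr = [[1, 0], [0, 1]]
rx, rc = zip(*[random_combination(src, hdr) for _ in range(2)])
print(decode(list(rx), list(rc)))  # recovers src
```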
Short message noisy network coding (SNNC) differs from long message noisy network coding (LNNC) in that one transmits many short messages in blocks rather than one long message with repetitive encoding. Several properties of SNNC are developed. First, SNNC with backward decoding achieves the same rates as SNNC with offset encoding and sliding window decoding for memoryless networks where each node transmits a multicast message; these rates are the same as those of LNNC with joint decoding. Second, SNNC enables early decoding if the channel quality happens to be good, which leads to mixed strategies that unify the advantages of decode-forward and noisy network coding. Third, the best decoders sometimes treat other nodes' signals as noise, and an iterative method is given to find the set of nodes that a given node should treat as noise sources.
This chapter deals with the topic of designing reliable and efficient codes for the storage and retrieval of large quantities of data on storage devices that are prone to failure. For a long time, the traditional objective has been to ensure reliability against data loss while minimizing storage overhead. More recently, a third concern has surfaced, namely the need to efficiently recover from the failure of a single storage unit, corresponding to recovery from the erasure of a single code symbol. We explain here how coding theory has evolved to tackle this fresh challenge.
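A toy example makes the repair problem concrete. Under a simple single-parity layout (not one of the chapter's constructions), rebuilding one lost block requires reading every surviving block; it is exactly this repair cost that regenerating and locally repairable codes aim to reduce.

```python
# Toy single-parity (RAID-4/5-style) layout: tolerates one erasure, but
# repairing one lost block reads ALL k surviving blocks -- the repair cost
# that the codes surveyed in the chapter are designed to reduce.
from functools import reduce

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data = [b"node0", b"node1", b"node2"]      # k = 3 equal-size data blocks
parity = reduce(xor_bytes, data)           # stored on a 4th node

# Node 1 fails; its block is rebuilt from every other node:
rebuilt = reduce(xor_bytes, [data[0], data[2], parity])
assert rebuilt == data[1]
```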
A Viterbi-like decoding algorithm is proposed in this paper for generalized convolutional network error correction coding. Unlike the classical Viterbi algorithm, our decoding algorithm is based on minimum error weight rather than the shortest Hamming distance between received and sent sequences. Network errors may disperse or neutralize during network transmission and convolutional network coding, so the classical decoding algorithm can no longer be employed. Source decoding was previously proposed via multiplication by the inverse of the network transmission matrix, but this inverse is hard to compute. Starting from the maximum a posteriori (MAP) decoding criterion, we find that it is equivalent to minimum error weight under our model. Inspired by the Viterbi algorithm, we propose a Viterbi-like decoding algorithm based on the minimum error weight of combined error vectors, which can be carried out directly at sink nodes and can correct any network errors within the capability of convolutional network error correction codes (CNECC). Under certain situations, the proposed algorithm can realize distributed decoding of CNECC.
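The dynamic-programming skeleton shared by both algorithms can be sketched with a pluggable branch metric. The code below runs the standard Viterbi recursion on the trellis of the common rate-1/2, memory-2 convolutional code (generators 7, 5 in octal) with Hamming distance as a placeholder metric; the paper's algorithm would instead plug in the error weight of combined network error vectors. The code structure and example are assumptions for illustration, not the paper's construction.

```python
# Sketch: Viterbi recursion with a pluggable branch metric. Classical
# decoding minimizes Hamming distance to the received symbols (used here
# as a placeholder); the paper's algorithm minimizes an error-weight
# metric on combined network error vectors instead.

G = [0b111, 0b101]   # generator polynomials (7, 5 octal)
M = 2                # encoder memory

def branch(state, bit):
    """Next state and output pair for input `bit` taken from `state`."""
    reg = (bit << M) | state
    out = tuple(bin(reg & g).count("1") & 1 for g in G)
    return reg >> 1, out

def viterbi(received,
            weight=lambda out, rx: sum(o != r for o, r in zip(out, rx))):
    """Minimize the accumulated `weight` over all trellis paths."""
    INF = float("inf")
    cost, paths = {0: 0}, {0: []}          # start in the all-zero state
    for rx in received:
        new_cost, new_paths = {}, {}
        for s, c in cost.items():
            for bit in (0, 1):
                ns, out = branch(s, bit)
                nc = c + weight(out, rx)
                if nc < new_cost.get(ns, INF):     # keep the survivor path
                    new_cost[ns], new_paths[ns] = nc, paths[s] + [bit]
        cost, paths = new_cost, new_paths
    return paths[min(cost, key=cost.get)]

# Encode 1,0,1,1 (plus 2 flush zeros), flip one channel bit, decode.
msg = [1, 0, 1, 1, 0, 0]
state, tx = 0, []
for b in msg:
    state, out = branch(state, b)
    tx.append(out)
tx[2] = (tx[2][0] ^ 1, tx[2][1])           # single-bit channel error
print(viterbi(tx)[:4])                     # -> [1, 0, 1, 1]
```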