In this paper, we propose an efficient coding scheme for the binary Chief Executive Officer (CEO) problem under the logarithmic loss criterion. Courtade and Weissman obtained the exact rate-distortion bound for a two-link binary CEO problem under this criterion. Using this bound, we find the optimal test-channel model and its parameters for the encoder of each link. Furthermore, an efficient encoding scheme based on compound LDGM-LDPC codes is presented to achieve the theoretical rates. In the proposed encoding scheme, a binary quantizer using LDGM codes and syndrome coding employing LDPC codes are applied. An iterative decoder is also presented as the fusion center to reconstruct the observation bits. The proposed decoder consists of a sum-product algorithm, which exploits side information from the other decoder, followed by a soft estimator. The output of the CEO decoder is the probability of the source bits conditioned on the received sequences of both links. This method outperforms the majority-based estimation of the source bits used in prior studies of the binary CEO problem. Our numerical examples verify that the performance of the proposed coding scheme is close to the theoretical bound in several cases.
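To see why a soft estimate can beat a majority-based decision, consider a toy fusion rule (a minimal sketch under an illustrative binary-symmetric assumption, not the paper's exact model): each link $i$ delivers a reconstructed bit that differs from the source bit with some effective crossover probability $\epsilon_i$, and the fusion center combines the links in the log-likelihood-ratio domain.

```python
import numpy as np

def soft_fuse(bits, eps):
    """Return P(X = 1 | reconstructed bits of all links), assuming link i
    flips the source bit with probability eps[i] (illustrative BSC model)."""
    llr = 0.0  # log P(observations | X=0) - log P(observations | X=1)
    for b, e in zip(bits, eps):
        p_given_x0 = e if b == 1 else 1 - e
        p_given_x1 = 1 - e if b == 1 else e
        llr += np.log(p_given_x0 / p_given_x1)
    return 1.0 / (1.0 + np.exp(llr))

# Two links that disagree: a plain majority vote is a tie, while the soft
# estimate lets the more reliable link (eps = 0.05) dominate the noisier one.
print(soft_fuse([1, 0], eps=[0.05, 0.4]))   # ~0.93
```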
The $L$-link binary Chief Executive Officer (CEO) problem under logarithmic loss is investigated in this paper. A quantization splitting technique is applied to convert the problem under consideration to a $(2L-1)$-step successive Wyner-Ziv (WZ) problem, for which a practical coding scheme is proposed. In the proposed scheme, low-density generator-matrix (LDGM) codes are used for binary quantization while low-density parity-check (LDPC) codes are used for syndrome generation; the decoder performs successive decoding based on the received syndromes and produces a soft reconstruction of the remote source. The simulation results indicate that the rate-distortion performance of the proposed scheme can approach the theoretical inner bound based on binary-symmetric test-channel models.
An $l$-link binary CEO problem is considered in this paper. We present a practical encoding and decoding scheme for this problem employing graph-based codes. A successive coding scheme is proposed that converts the $l$-link binary CEO problem into $(2l-1)$ single binary Wyner-Ziv (WZ) problems. By using compound LDGM-LDPC codes, the theoretical bound of each binary WZ problem is asymptotically achievable. Our proposed decoder successively decodes the received data by employing the well-known Sum-Product (SP) algorithm and leverages the results to reconstruct the source. The sum-rate distortion performance of our proposed coding scheme is compared with the theoretical bounds under the logarithmic loss (log-loss) criterion.
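The core decoding step in these LDGM-LDPC schemes is SP (belief-propagation) decoding of an LDPC code from a transmitted syndrome, with priors supplied by side information from previously decoded links. The routine below is a minimal sketch of that step for a toy parity-check matrix; the function name, the way the prior LLRs are formed, and the fixed iteration count are illustrative assumptions rather than the papers' actual implementation.

```python
import numpy as np

def sp_syndrome_decode(H, syndrome, prior_llr, iters=50):
    """Minimal sum-product syndrome decoder for a toy binary LDPC code.
    H          : (m, n) binary parity-check matrix
    syndrome   : length-m syndrome produced by the encoder, s = H x mod 2
    prior_llr  : length-n LLRs log P(x=0)/P(x=1) given the side information
    Returns a hard decision x_hat; decoding succeeds when H x_hat = s (mod 2)."""
    m, n = H.shape
    edges = [(c, v) for c in range(m) for v in range(n) if H[c, v]]
    msg_vc = {e: prior_llr[e[1]] for e in edges}   # variable-to-check messages
    msg_cv = {e: 0.0 for e in edges}               # check-to-variable messages
    for _ in range(iters):
        for (c, v) in edges:                       # check-node update
            prod = np.prod([np.tanh(msg_vc[(c, u)] / 2)
                            for u in range(n) if H[c, u] and u != v])
            sign = 1 - 2 * syndrome[c]             # syndrome bit flips the parity
            msg_cv[(c, v)] = 2 * np.arctanh(np.clip(sign * prod, -0.999999, 0.999999))
        for (c, v) in edges:                       # variable-node update
            msg_vc[(c, v)] = prior_llr[v] + sum(msg_cv[(d, v)]
                                                for d in range(m) if H[d, v] and d != c)
    posterior = np.array([prior_llr[v] + sum(msg_cv[(c, v)] for c in range(m) if H[c, v])
                          for v in range(n)])
    return (posterior < 0).astype(int)             # positive LLR favours the bit 0
```

Feeding the parity-check matrix of a short LDPC code, the received syndrome, and LLRs derived from the previously decoded link would yield an estimate of the quantized sequence, which the successive decoder then uses as side information for the next step.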
In this paper, we propose an efficient coding scheme for the two-link binary Chief Executive Officer (CEO) problem under the logarithmic loss criterion. The exact rate-distortion bound for a two-link binary CEO problem under the logarithmic loss was obtained by Courtade and Weissman. We propose an encoding scheme based on compound LDGM-LDPC codes to achieve the theoretical bounds. In the proposed encoding, a binary quantizer using LDGM codes and syndrome coding employing LDPC codes are applied. An iterative joint decoder is also designed as the fusion center. The proposed CEO decoder is based on the sum-product algorithm and a soft estimator.
This paper studies the joint design of optimal convolutional codes (CCs) and CRC codes when the serial list Viterbi algorithm (S-LVA) is employed in order to achieve a target frame error rate (FER). We first analyze the S-LVA performance with respect to SNR and list size, respectively, and prove the convergence of the expected number of decoding attempts as the SNR goes to its extremes. We then propose the coded channel capacity as the criterion to jointly design the optimal CC-CRC pair and the optimal list size, and show that the optimal list size of S-LVA is always the cardinality of all possible CCs. With the maximum list size, we choose the SNR gap to the random coding union (RCU) bound as the design metric for the optimal CC-CRC pair; the optimal CC-CRC pair is the one that achieves a target SNR gap with the least complexity. Finally, we show that a weaker CC with a strong optimal CRC code could be as powerful as a strong CC with no CRC code.
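For intuition, S-LVA traverses the trellis paths in order of decreasing likelihood and stops at the first candidate whose CRC check passes; the number of attempts until that happens is the quantity whose expectation is analyzed above. The sketch below illustrates only this CRC-screening loop under the assumption that a list Viterbi decoder has already produced the candidate words; the candidate list, the CRC-3 polynomial, and the function names are illustrative, not the paper's design.

```python
def crc_check(word, poly):
    """Return True if the binary word (info bits followed by CRC bits) is
    divisible by the generator polynomial `poly` (0/1 list, MSB first)."""
    reg = list(word)
    for i in range(len(word) - len(poly) + 1):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return not any(reg)

def slva_decode(candidates, poly, max_list_size):
    """Serial list Viterbi sketch: test candidate paths in decreasing-likelihood
    order and accept the first one that passes the CRC (else declare erasure).
    `candidates` stands in for the output of an actual list Viterbi decoder."""
    for attempt, word in enumerate(candidates[:max_list_size], start=1):
        if crc_check(word, poly):
            return word, attempt          # decoded word and number of attempts
    return None, max_list_size            # no candidate passed the CRC

# Toy usage with CRC-3 polynomial x^3 + x + 1 -> [1, 0, 1, 1]:
# the first (most likely) path fails the CRC, the second passes.
paths = [[1, 1, 0, 1, 0, 0, 0], [1, 1, 0, 1, 0, 0, 1]]
print(slva_decode(paths, poly=[1, 0, 1, 1], max_list_size=2))
```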
A loss function measures the discrepancy between the true values and their estimated fits, for a given instance of data. In classification problems, a loss function is said to be proper if a minimizer of the expected loss is the true underlying probability. We show that for binary classification, the divergence associated with smooth, proper, and convex loss functions is upper bounded by the Kullback-Leibler (KL) divergence, to within a normalization constant. This implies that by minimizing the logarithmic loss associated with the KL divergence, we minimize an upper bound on any choice of loss from this set. As such, the logarithmic loss is universal in the sense of providing performance guarantees with respect to a broad class of accuracy measures. Importantly, this notion of universality is not problem-specific, enabling its use in diverse applications, including predictive modeling, data clustering, and sample complexity analysis. Generalizations to arbitrary finite alphabets are also developed. The derived inequalities extend several well-known $f$-divergence results.
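A quick numerical illustration of this ordering in the binary case (a sanity check under standard definitions, not an example taken from the paper): the divergence induced by the smooth, proper, convex squared (Brier) loss is $(p-q)^2$, while Pinsker's inequality gives $\mathrm{KL}(p\|q) \ge 2(p-q)^2$ in nats, so the KL divergence dominates the Brier divergence up to a constant factor.

```python
import numpy as np

def kl_binary(p, q):
    """KL divergence (nats) between Bernoulli(p) and Bernoulli(q)."""
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def brier_divergence(p, q):
    """Divergence induced by the squared (Brier) loss:
    E_p[(Y - q)^2] - E_p[(Y - p)^2] = (p - q)^2 for binary Y."""
    return (p - q) ** 2

rng = np.random.default_rng(0)
for _ in range(5):
    p, q = rng.uniform(0.05, 0.95, size=2)
    assert 2 * brier_divergence(p, q) <= kl_binary(p, q) + 1e-12   # Pinsker-type bound
    print(f"p={p:.2f} q={q:.2f}  KL={kl_binary(p, q):.4f}  "
          f"2(p-q)^2={2 * brier_divergence(p, q):.4f}")
```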