
Decoding Error Probability of the Random Matrix Ensemble over the Erasure Channel

Added by Chin Hei Chan
Publication date: 2021
Language: English





Using tools developed in a recent work by Shen and the second author, we carry out an in-depth study of the average decoding error probability of the random matrix ensemble over the erasure channel under three decoding principles: unambiguous decoding, maximum likelihood decoding, and list decoding. We obtain explicit formulas for the average decoding error probabilities under these three principles and compute the corresponding error exponents. Moreover, for unambiguous decoding, we compute the variance of the decoding error probability of the random matrix ensemble and the error exponent of the variance, which together imply a strong concentration result: roughly speaking, the ratio of the decoding error probability of a random code in the ensemble to the average decoding error probability of the ensemble converges to 1 with high probability as the code length goes to infinity.
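Over the erasure channel, unambiguous decoding of a linear code has a simple linear-algebraic criterion: decoding fails exactly when some nonzero codeword is supported on the erased positions, i.e., when the erased columns of the parity-check matrix are linearly dependent. The following Monte Carlo sketch (function names and parameters are my own, not the paper's; the paper derives exact formulas rather than estimates) illustrates the average error probability of the random matrix ensemble under this criterion:

```python
import random

def gf2_rank(vectors):
    """Rank over GF(2) of column vectors given as integer bitmasks."""
    basis = {}  # leading-bit position -> stored basis vector
    rank = 0
    for v in vectors:
        while v:
            lead = v.bit_length() - 1
            if lead in basis:
                v ^= basis[lead]      # reduce by the stored pivot
            else:
                basis[lead] = v
                rank += 1
                break
    return rank

def estimate_error_prob(n, m, eps, trials, rng):
    """Monte Carlo estimate of the average unambiguous-decoding error
    probability: a trial fails iff the erased columns of a uniformly
    random m x n parity-check matrix over GF(2) are linearly dependent
    (equivalently, a nonzero codeword sits inside the erasure set)."""
    failures = 0
    for _ in range(trials):
        columns = [rng.getrandbits(m) for _ in range(n)]
        erased = [c for c in columns if rng.random() < eps]
        if gf2_rank(erased) < len(erased):
            failures += 1
    return failures / trials

rng = random.Random(0)
p_low = estimate_error_prob(n=60, m=30, eps=0.30, trials=2000, rng=rng)
p_high = estimate_error_prob(n=60, m=30, eps=0.95, trials=200, rng=rng)
```

With erasure probability well above $m/n$ the erased set almost surely outnumbers the rows, so `p_high` is close to 1, while `p_low` is small; the paper's explicit formulas describe exactly this average as a function of the parameters.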




Related research

A lower bound on the maximum likelihood (ML) decoding error exponent of linear block code ensembles on the erasure channel is developed. The lower bound turns out to be positive over an ensemble-specific interval of erasure probabilities when the ensemble weight spectral shape function tends to a negative value as the fractional codeword weight tends to zero. For these ensembles we can therefore lower bound the block-wise ML decoding threshold. Two examples are presented, namely, linear random parity-check codes and fixed-rate Raptor codes with linear random precoders. While for the former a full analytical solution is possible, for the latter we can lower bound the ML decoding threshold on the erasure channel by simply solving a $2 \times 2$ system of nonlinear equations.
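For the random parity-check ensemble mentioned above, the weight spectral shape is the well-known expression $h_2(\theta) - (1-R)$, which tends to $-(1-R) < 0$ as the fractional weight $\theta \to 0$, so the ensemble meets the abstract's condition. A minimal sketch (my own naming, assuming rate $R = 1/2$) that checks the sign near zero weight and locates the zero crossing, the relative Gilbert-Varshamov distance, by bisection:

```python
from math import log2

def h2(x):
    """Binary entropy function in bits."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def spectral_shape(theta, rate):
    """Asymptotic normalized log of the expected weight enumerator of
    the random parity-check ensemble at fractional weight theta."""
    return h2(theta) - (1 - rate)

# The shape is negative near theta = 0 and increases up to theta = 1/2,
# so its zero crossing can be found by bisection.
R = 0.5
lo, hi = 1e-9, 0.5
for _ in range(60):
    mid = (lo + hi) / 2
    if spectral_shape(mid, R) < 0:
        lo = mid
    else:
        hi = mid
theta_gv = (lo + hi) / 2   # relative GV distance, about 0.110 at R = 1/2
```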
Applications where multiple users communicate with a common server and desire low latency are common and increasing. This paper studies a network with two source nodes, one relay node and a destination node, where each source node wishes to transmit a sequence of messages, through the relay, to the destination, which is required to decode the messages under a strict delay constraint $T$. The network with a single source node was studied in \cite{Silas2019}. We start by introducing two important tools: the delay spectrum, which generalizes delay-constrained point-to-point transmission, and concatenation, which, similar to time sharing, allows combinations of different codes in order to achieve a desired regime of operation. Using these tools, we generalize the two schemes previously presented in \cite{Silas2019} and propose a novel scheme that achieves optimal rates under a set of well-defined conditions. The novel scheme is further optimized to improve the achievable rates in scenarios where the conditions for optimality are not met.
We show that Reed-Muller codes achieve capacity under maximum a posteriori bit decoding for transmission over the binary erasure channel at all rates $0 < R < 1$. The proof is generic and applies to other codes with a sufficient amount of symmetry as well. The main idea is to combine the following observations: (i) monotone functions experience a sharp threshold behavior, (ii) the extrinsic information transfer (EXIT) functions are monotone, (iii) Reed-Muller codes are 2-transitive and thus the EXIT functions associated with their codeword bits are all equal, and (iv) therefore the Area Theorem for the average EXIT functions implies that the Reed-Muller threshold is at channel capacity.
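The Area Theorem invoked in step (iv) can be checked directly on a toy code. For the $[n,1]$ repetition code over BEC($\epsilon$), a bit is recoverable from the other positions unless all $n-1$ of them are erased, so its MAP EXIT function is $h(\epsilon) = \epsilon^{n-1}$, whose area over $[0,1]$ equals the rate $1/n$. A minimal numerical check (the function name is my own):

```python
def exit_repetition(eps, n):
    """MAP EXIT value of the [n,1] repetition code over BEC(eps): the
    extrinsic entropy of a bit equals the probability that every one of
    the other n - 1 positions is erased."""
    return eps ** (n - 1)

# Midpoint-rule check of the Area Theorem: the area under the MAP EXIT
# curve equals the code rate R = 1/n.
n, steps = 3, 100_000
area = sum(exit_repetition((i + 0.5) / steps, n) for i in range(steps)) / steps
# area is numerically indistinguishable from 1/3, the rate of the code
```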
Product codes (PCs) and staircase codes (SCCs) are conventionally decoded based on bounded distance decoding (BDD) of the component codes and iterating between row and column decoders. The performance of iterative BDD (iBDD) can be improved using soft-aided (hybrid) algorithms. Among these, iBDD with combined reliability (iBDD-CR) has recently been proposed for PCs, yielding sizeable performance gains at the expense of a minor increase in complexity compared to iBDD. In this paper, we first extend iBDD-CR to SCCs. We then propose two novel decoding algorithms for PCs and SCCs which improve upon iBDD-CR. The new algorithms use an extra decoding attempt based on error-and-erasure decoding of the component codes. The proposed algorithms require only the exchange of hard messages between component decoders, making them an attractive solution for ultra-high-throughput fiber-optic systems. Simulation results show that our algorithms based on two decoding attempts achieve gains of up to $0.88$ dB for both PCs and SCCs. This corresponds to a $33\%$ optical reach enhancement over iBDD with bit-interleaved coded modulation using $256$-quadrature amplitude modulation.
We consider the problem of determining the zero-error list-decoding capacity of the $q/(q-1)$ channel studied by Elias (1988). The $q/(q-1)$ channel has input and output alphabet consisting of $q$ symbols, say, $Q = \{x_1, x_2, \ldots, x_q\}$; when the channel receives an input $x \in Q$, it outputs a symbol other than $x$ itself. Let $n(m,q,\ell)$ be the smallest $n$ for which there is a code $C \subseteq Q^n$ of $m$ elements such that for every list $w_1, w_2, \ldots, w_{\ell+1}$ of distinct codewords from $C$, there is a coordinate $j \in [n]$ that satisfies $\{w_1[j], w_2[j], \ldots, w_{\ell+1}[j]\} = Q$. We show that for $\epsilon < 1/6$, for all large $q$ and large enough $m$, $n(m, q, \epsilon q \ln q) \geq \Omega(\exp(q^{1-6\epsilon}/8) \log_2 m)$. The lower bound obtained by Fredman and Komlós (1984) for perfect hashing implies that $n(m,q,q-1) = \exp(\Omega(q)) \log_2 m$; similarly, the lower bound obtained by Körner (1986) for nearly-perfect hashing implies that $n(m,q,q) = \exp(\Omega(q)) \log_2 m$. These results show that the zero-error list-decoding capacity of the $q/(q-1)$ channel with lists of size at most $q$ is exponentially small. Extending these bounds, Chakraborty et al. (2006) showed that the capacity remains exponentially small even if the list size is allowed to be as large as $1.58q$. Our result implies that the zero-error list-decoding capacity of the $q/(q-1)$ channel with list size $\epsilon q$ for $\epsilon < 1/6$ is $\exp(\Omega(q^{1-6\epsilon}))$. This resolves the conjecture raised by Chakraborty et al. (2006) about the zero-error list-decoding capacity of the $q/(q-1)$ channel at larger list sizes.
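The combinatorial condition defining $n(m,q,\ell)$ can be checked exhaustively for tiny parameters. A brute-force sketch (the example codes and function name are my own, not from the paper): for $q = 3$ and list size $\ell = 2$, the condition says every three codewords must exhibit all three symbols in some coordinate, i.e., the code is trifferent.

```python
from itertools import combinations

def covers_alphabet(code, q, ell):
    """Check the zero-error list-decoding condition: every (ell+1)-subset
    of codewords has a coordinate whose symbols cover the whole
    q-ary alphabet."""
    n = len(code[0])
    for subset in combinations(code, ell + 1):
        if not any(len({w[j] for w in subset}) == q for j in range(n)):
            return False
    return True

good = [(0, 0, 0), (1, 1, 1), (2, 2, 2), (0, 1, 2)]   # a trifferent code
bad = [(0, 0, 0), (1, 1, 1), (0, 1, 0)]               # symbol 2 never appears
```

For $q = 2$ and $\ell = 1$ the condition reduces to all codewords being distinct, which is why the problem only becomes interesting, and exponentially hard as the bounds above show, for larger alphabets and list sizes.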
