
On the Performance Analysis of Streaming Codes over the Gilbert-Elliott Channel

Added by Myna Vajha
Publication date: 2020
Language: English





The Gilbert-Elliott (GE) channel is a commonly accepted model for packet erasures in networks. Streaming codes are a class of packet-level erasure codes designed to provide reliable communication over the GE channel. The design of a streaming code may be viewed as a two-step process. In the first step, a more tractable, delay-constrained sliding window (DCSW) channel model is considered as a proxy for the GE channel. The streaming code is then designed to reliably recover from all erasures introduced by the DCSW channel model. Simulation is typically used to evaluate the performance of the streaming code over the original GE channel, as analytic performance evaluation is challenging. In the present paper, we take an important first step towards analytical performance evaluation. Recognizing that most efficient constructions of a streaming code are based on the diagonal embedding or horizontal embedding of scalar block codes within a packet stream, this paper provides upper and lower bounds on the block-erasure probability of the underlying scalar block code when operated over the GE channel.
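As the abstract notes, performance over the GE channel is usually estimated by simulation. The sketch below is a minimal illustration of such a simulation, assuming a two-state GE model with hypothetical transition and erasure parameters and an [n, k] MDS-like block code that decodes whenever a codeword suffers at most n - k erasures; none of the numerical values are taken from the paper.

    import random

    def simulate_ge_block_erasure(n=14, k=10, p_gb=0.05, p_bg=0.4,
                                  eps_good=0.01, eps_bad=0.5,
                                  num_blocks=100_000, seed=1):
        """Estimate the block-erasure probability of an [n, k] MDS-like code
        over a two-state Gilbert-Elliott channel (all parameters hypothetical)."""
        rng = random.Random(seed)
        state = "G"                      # channel starts in the good state
        failures = 0
        for _ in range(num_blocks):
            erasures = 0
            for _ in range(n):           # one embedded scalar codeword, packet by packet
                eps = eps_good if state == "G" else eps_bad
                if rng.random() < eps:
                    erasures += 1
                # Markov state transition for the next packet slot
                if state == "G" and rng.random() < p_gb:
                    state = "B"
                elif state == "B" and rng.random() < p_bg:
                    state = "G"
            if erasures > n - k:         # MDS-like decoding fails beyond n - k erasures
                failures += 1
        return failures / num_blocks

    print(simulate_ge_block_erasure())

The bounds derived in the paper target exactly this quantity analytically, whereas the sketch estimates it empirically.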



Related Research

Applications where multiple users communicate with a common server and desire low latency are common and increasing. This paper studies a network with two source nodes, one relay node and a destination node, where each source node wishes to transmit a sequence of messages, through the relay, to the destination, which is required to decode the messages under a strict delay constraint $T$. The network with a single source node has been studied in [Silas2019]. We start by introducing two important tools: the delay spectrum, which generalizes delay-constrained point-to-point transmission, and concatenation, which, similar to time sharing, allows different codes to be combined in order to achieve a desired regime of operation. Using these tools, we generalize the two schemes previously presented in [Silas2019] and propose a novel scheme that achieves optimal rates under a set of well-defined conditions. This novel scheme is further optimized to improve the achievable rates in scenarios where the conditions for optimality are not met.
In this paper, we study systematic Luby Transform (SLT) codes over the additive white Gaussian noise (AWGN) channel. We introduce the encoding scheme of SLT codes and give the bipartite graph for the iterative belief propagation (BP) decoding algorithm. As with low-density parity-check codes, Gaussian approximation (GA) is applied to obtain the asymptotic performance of SLT codes. Recent work on SLT codes has focused on better encoding and decoding algorithms and on the design of degree distributions. In our work, we propose a novel linear programming method to optimize the degree distribution. Simulation results show that the proposed distributions provide better bit-error-ratio (BER) performance. Moreover, we analyze the lower bound of SLT codes and offer closed-form expressions.
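To make the encoding step concrete, the following sketch shows systematic LT-style encoding over GF(2): the k source symbols are sent first, and each additional coded symbol is the XOR of d randomly chosen source symbols, with d drawn from a degree distribution. The degree distribution and symbol counts here are hypothetical placeholders, not the optimized distributions proposed in the paper.

    import random

    def slt_encode(source_bits, num_coded, degree_dist, seed=0):
        """Systematic LT-style encoding sketch over GF(2).
        degree_dist is a list of (degree, probability) pairs (hypothetical values)."""
        rng = random.Random(seed)
        k = len(source_bits)
        degrees, probs = zip(*degree_dist)
        coded = []
        for _ in range(num_coded):
            d = rng.choices(degrees, weights=probs, k=1)[0]
            neighbors = rng.sample(range(k), min(d, k))   # edges of the bipartite graph
            value = 0
            for idx in neighbors:
                value ^= source_bits[idx]                 # XOR of the chosen source bits
            coded.append((neighbors, value))
        # systematic part first, then the LT-coded symbols
        return list(source_bits), coded

    src = [random.randint(0, 1) for _ in range(20)]
    systematic, parity = slt_encode(src, num_coded=10,
                                    degree_dist=[(1, 0.1), (2, 0.5), (3, 0.4)])

The recorded neighbor lists are what define the bipartite graph used by the BP decoder.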
This paper studies low-latency streaming codes for the multi-hop network. The source transmits a sequence of messages (streaming messages) to a destination through a chain of relays, where each hop is subject to packet erasures. Every source message has to be recovered perfectly at the destination within a delay constraint of $T$ time slots. In any sliding window of $T+1$ time slots, we assume no more than $N_j$ erasures are introduced by the $j$th hop channel. The capacity for the case of a single relay (a three-node network) was derived by Fong et al. [1]. While the converse derived for the three-node case can be extended to any number of nodes using a similar technique (analyzing the case where erasures on the other links are consecutive), we demonstrate that the achievable scheme, which relied on a clever symbol-wise decode-and-forward strategy, cannot be straightforwardly extended without a loss in performance. The coding scheme for the three-node network, which was shown to achieve the upper bound, was "state-independent" (i.e., it does not depend on the specific erasure pattern). While this is a very desirable property, in this paper we suggest a "state-dependent" scheme (i.e., one that depends on the specific erasure pattern) and show that it achieves the upper bound up to the size of an additional header. Since, as we show, the size of the header does not depend on the field size, the gap between the achievable rate and the upper bound decreases as the field size increases.
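The admissibility condition on each hop, that no sliding window of $T+1$ time slots contains more than $N_j$ erasures, reduces to a simple check on an erasure pattern. The sketch below illustrates that check with hypothetical parameter values chosen only for demonstration.

    def satisfies_window_constraint(erasure_pattern, T, N_j):
        """Return True if every window of T + 1 consecutive time slots contains
        at most N_j erasures (erasure_pattern[t] is True if slot t is erased)."""
        w = T + 1
        for start in range(max(len(erasure_pattern) - w + 1, 1)):
            window = erasure_pattern[start:start + w]
            if sum(window) > N_j:
                return False
        return True

    # Hypothetical example: delay constraint T = 4, at most N_j = 2 erasures per window
    pattern = [False, True, False, True, False, False, True, False]
    print(satisfies_window_constraint(pattern, T=4, N_j=2))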
We study the performance of low-density parity-check (LDPC) codes over finite integer rings, over two channels that arise from the Lee metric. The first channel is a discrete memoryless channel (DMC) matched to the Lee metric. The second channel adds to each codeword an error vector of constant Lee weight, where the error vector is picked uniformly at random from the set of vectors of constant Lee weight. It is shown that the marginal conditional distributions of the two channels coincide in the limit of large blocklengths. The performance of selected LDPC code ensembles is analyzed by means of density evolution and finite-length simulations, with belief propagation decoding and with a low-complexity symbol message passing algorithm.
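For readers unfamiliar with the Lee metric, the short sketch below computes the Lee weight of a vector over the integer ring Z_m: each symbol x contributes min(x, m - x), and the weight of a vector is the sum over its entries. The modulus used in the example is an arbitrary illustrative choice.

    def lee_weight(vector, m):
        """Lee weight over Z_m: each symbol x contributes min(x mod m, m - x mod m)."""
        total = 0
        for x in vector:
            r = x % m
            total += min(r, m - r)
        return total

    # Illustrative example over Z_7 (modulus chosen only for demonstration)
    print(lee_weight([0, 1, 3, 5, 6], m=7))   # 0 + 1 + 3 + 2 + 1 = 7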
A lower bound on the maximum likelihood (ML) decoding error exponent of linear block code ensembles over the erasure channel is developed. The lower bound turns out to be positive, over an ensemble-specific interval of erasure probabilities, when the ensemble weight spectral shape function tends to a negative value as the fractional codeword weight tends to zero. For these ensembles we can therefore lower-bound the block-wise ML decoding threshold. Two examples are presented, namely linear random parity-check codes and fixed-rate Raptor codes with linear random precoders. While for the former a full analytical solution is possible, for the latter we can lower-bound the ML decoding threshold on the erasure channel by simply solving a 2 x 2 system of nonlinear equations.
