
Channel input adaptation via natural type selection

Added by Sergey Tridenski
Publication date: 2018
Language: English





We consider a channel-independent decoder which is for i.i.d. random codes what the maximum mutual-information decoder is for constant composition codes. We show that this decoder results in exactly the same i.i.d. random-coding error exponent, and almost the same correct-decoding exponent, for a given codebook distribution as the maximum-likelihood decoder. We propose an algorithm for computation of the optimal correct-decoding exponent which operates on the corresponding expression for the channel-independent decoder. The proposed algorithm comes in t…
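The abstract is truncated before it describes the algorithm, so its exact update rules are not reproduced here. As a purely illustrative sketch of the alternating fixed-point pattern that such exponent computations typically follow, below is a standard Blahut-Arimoto iteration; it computes channel capacity rather than the paper's correct-decoding exponent, and the function name and test channel are assumptions for illustration.

```python
import numpy as np

def blahut_arimoto(W, tol=1e-9, max_iter=10_000):
    """Capacity (in nats) of a DMC with transition matrix W[x, y] = W(y|x).

    Illustrative alternating-optimization sketch, not the paper's algorithm.
    """
    n_x = W.shape[0]
    p = np.full(n_x, 1.0 / n_x)              # input distribution, start uniform
    D = np.zeros(n_x)
    for _ in range(max_iter):
        q = p @ W                             # induced output distribution q(y)
        with np.errstate(divide="ignore", invalid="ignore"):
            log_ratio = np.where(W > 0, np.log(W / q), 0.0)
        D = (W * log_ratio).sum(axis=1)       # D[x] = KL( W(.|x) || q )
        p_new = p * np.exp(D)                 # multiplicative re-weighting step
        p_new /= p_new.sum()
        if np.abs(p_new - p).max() < tol:
            p = p_new
            break
        p = p_new
    return float(p @ D), p                    # capacity estimate, optimizer

# Binary symmetric channel, crossover 0.1: capacity = ln 2 - h(0.1) ≈ 0.368 nats
C, p_opt = blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]]))
```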



Related research

The asymptotic capacity at low input powers of an average-power limited or an average- and peak-power limited discrete-time Poisson channel is considered. For a Poisson channel whose dark current is zero or decays to zero linearly with its average input power $E$, capacity scales like $E\log\frac{1}{E}$ for small $E$. For a Poisson channel whose dark current is a nonzero constant, capacity scales, to within a constant, like $E\log\log\frac{1}{E}$ for small $E$.
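A quick numeric check makes the gap between these two scalings concrete. The snippet below is illustrative only: it evaluates the leading-order expressions and ignores the constants suppressed by the asymptotics.

```python
import math

def scale_zero_dark(E):
    # E * log(1/E): scaling when the dark current is zero or decays linearly in E
    return E * math.log(1.0 / E)

def scale_const_dark(E):
    # E * log log(1/E): scaling (to within a constant) for constant dark current
    return E * math.log(math.log(1.0 / E))

for E in (1e-2, 1e-4, 1e-8):
    print(f"E={E:g}  E*log(1/E)={scale_zero_dark(E):.3g}  "
          f"E*loglog(1/E)={scale_const_dark(E):.3g}")
```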
We study communication systems over band-limited Additive White Gaussian Noise (AWGN) channels in which the transmitter output is constrained to be symmetric binary (bi-polar). In this work we improve the original Ozarov-Wyner-Ziv (OWZ) lower bound on capacity by introducing a new achievability scheme with two advantages over the studied OWZ scheme which is based on peak-power constrained pulse-amplitude modulation. Our scheme achieves a moderately improved information rate and it does so with much less sign transitions of the binary signal. The gap between the known upper-bound based on spectral constrains of bi-polar signals and our achievable lower bound is reduced to 0.86 bits per Nyquist interval at high SNR.
In this paper, we study the problem of channel simulation via interactive communication, known as the coordination capacity, in a two-terminal network. We assume that two terminals observe i.i.d. copies of two random variables and would like to generate i.i.d. copies of two other random variables jointly distributed with the observed random variables. The terminals are provided with two-way communication links and shared common randomness, all at limited rates. Two special cases of this problem are the interactive function computation studied by Ma and Ishwar, and the tradeoff curve between one-way communication and shared randomness studied by Cuff. The latter work inspired Gohari and Anantharam to study the general problem of channel simulation via interactive communication stated above; however, only inner and outer bounds for the special case of no shared randomness were obtained in their work. In this paper we settle this problem by providing an exact computable characterization of the multi-round problem. To show this we employ the technique of output statistics of random binning that has been recently developed by the authors.
The identification (ID) capacity region of the two-receiver broadcast channel (BC) is shown to be the set of rate-pairs for which, for some distribution on the channel input, each receiver's ID rate does not exceed the mutual information between the channel input and the channel output that it observes. Moreover, the capacity region's interior is achieved by codes with deterministic encoders. The results are obtained under the average-error criterion, which requires that each receiver reliably identify its message whenever the message intended for the other receiver is drawn at random. They hold also for channels whose transmission capacity region is to date unknown. Key to the proof is a new ID code construction for the single-user channel. Extensions to the BC with one-sided feedback and the three-receiver BC are also discussed: inner bounds on their ID capacity regions are obtained, and these are shown to be tight in some cases.
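As a hypothetical worked example of this characterization, the sketch below evaluates one achievable ID rate-pair corner $(I(X;Y_1), I(X;Y_2))$ for a toy broadcast channel whose two marginals are binary symmetric channels; the channel matrices and input law are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def mutual_information(p_x, W):
    """I(X;Y) in bits for input distribution p_x and channel W[x, y] = W(y|x)."""
    q = p_x @ W                                   # induced output distribution
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(W > 0, W * np.log2(W / q), 0.0)
    return float(p_x @ terms.sum(axis=1))

# Toy BC (assumed): receiver 1 sees a BSC(0.1), receiver 2 sees a BSC(0.2)
W1 = np.array([[0.9, 0.1], [0.1, 0.9]])
W2 = np.array([[0.8, 0.2], [0.2, 0.8]])
p_x = np.array([0.5, 0.5])                        # common channel input law

# Per the characterization, any (R1, R2) with R1 <= I(X;Y1) and R2 <= I(X;Y2)
# for some input distribution is an achievable ID rate-pair.
R1, R2 = mutual_information(p_x, W1), mutual_information(p_x, W2)
```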
Erdal Arikan, 2009
A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\{W_N^{(i)} : 1 \le i \le N\}$ such that, as $N$ becomes large, the fraction of indices $i$ for which $I(W_N^{(i)})$ is near 1 approaches $I(W)$ and the fraction for which $I(W_N^{(i)})$ is near 0 approaches $1-I(W)$. The polarized channels $\{W_N^{(i)}\}$ are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC $W$ with $I(W)>0$ and any target rate $R < I(W)$, there exists a sequence of polar codes $\{\mathscr{C}_n ; n \ge 1\}$ such that $\mathscr{C}_n$ has block-length $N=2^n$, rate $\ge R$, and probability of block error under successive cancellation decoding bounded as $P_e(N,R) \le O(N^{-1/4})$ independently of the code rate. This performance is achievable by encoders and decoders with complexity $O(N\log N)$ for each.
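For the binary erasure channel the polarization recursion has a closed form, which makes the statement easy to check numerically: one level of the transform turns BEC($\epsilon$) into a "minus" channel BEC($2\epsilon-\epsilon^2$) and a "plus" channel BEC($\epsilon^2$). A minimal sketch follows (BEC-specific; general B-DMCs require density evolution or approximate constructions):

```python
import numpy as np

def polarize_bec(eps, n):
    """Erasure probabilities of the N = 2**n synthesized channels for BEC(eps)."""
    z = np.array([eps])
    for _ in range(n):
        # One polarization level: the 'minus' channel worsens, the 'plus' improves
        z = np.concatenate([2 * z - z**2, z**2])
    return z

z = polarize_bec(0.5, 14)              # N = 2**14 channels built from BEC(1/2)
frac_near_1 = np.mean(z < 1e-3)        # I(W_N^(i)) near 1  <=>  erasure near 0
frac_near_0 = np.mean(z > 1 - 1e-3)    # I(W_N^(i)) near 0  <=>  erasure near 1
# As n grows, frac_near_1 -> I(W) = 0.5 and frac_near_0 -> 1 - I(W) = 0.5
```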
