
Analysis and Design of Partially Information- and Partially Parity-Coupled Turbo Codes

Added by Min Qiu
Publication date: 2020
Language: English





In this paper, we study a class of spatially coupled turbo codes, namely partially information- and partially parity-coupled turbo codes. This class of codes enjoys several advantages: the code rate can be adjusted flexibly by varying the coupling ratio, while the encoding and decoding architectures of the underlying component codes remain unchanged. We first provide construction methods for partially coupled turbo codes with coupling memory $m$ and study the corresponding graph models. We then derive the density evolution equations for the corresponding ensembles on the binary erasure channel (BEC) to precisely compute their iterative decoding thresholds. Rate-compatible designs and their decoding thresholds are also provided, where the coupling and puncturing ratios are jointly optimized to achieve the largest decoding threshold for a given target code rate. Our results show that for a wide range of code rates, the proposed codes attain close-to-capacity performance, and the decoding performance improves as the coupling memory increases. In particular, the proposed partially parity-coupled turbo codes have thresholds within 0.0002 of the BEC capacity for rates ranging from $1/3$ to $9/10$, yielding an attractive way to construct rate-compatible capacity-approaching channel codes.
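The capacity-gap claim above can be made concrete: on a BEC with erasure probability $\epsilon$, a rate-$R$ code can at best tolerate $\epsilon = 1 - R$ (the Shannon limit), so a threshold within 0.0002 of capacity means the ensemble's threshold satisfies $\epsilon^* \geq (1 - R) - 0.0002$. A minimal sketch (the rates listed are illustrative sample points, not the paper's exact design set):

```python
# Shannon limit on the BEC: a rate-R code can at best tolerate
# erasure probability 1 - R.
def bec_shannon_limit(rate):
    return 1.0 - rate

# Capacity gap reported for the partially parity-coupled turbo codes.
GAP = 0.0002

for rate in [1/3, 1/2, 2/3, 4/5, 9/10]:
    eps_sh = bec_shannon_limit(rate)
    # a threshold within GAP of capacity satisfies eps* >= eps_sh - GAP
    print(f"R = {rate:.3f}: Shannon limit {eps_sh:.4f}, "
          f"threshold >= {eps_sh - GAP:.4f}")
```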



Related research

Partially information coupled turbo codes (PIC-TCs) are a class of spatially coupled turbo codes that can approach the BEC capacity while keeping the encoding and decoding architectures of the underlying component codes unchanged. However, PIC-TCs suffer significant rate loss compared to their component rate-1/3 turbo code, and the rate loss increases with the coupling ratio. To absorb the rate loss, in this paper we propose partially information coupled duo-binary turbo codes (PIC-dTCs). Given a rate-1/3 turbo code as the benchmark, we construct a duo-binary turbo code by introducing one extra input to the benchmark code. Then, parts of the information sequence from the original input are coupled to the extra input of the succeeding code blocks. By examining the graph model of PIC-dTC ensembles, we derive the exact density evolution equations of the PIC-dTC ensembles and compute their belief propagation decoding thresholds on the binary erasure channel. Simulation results verify the correctness of our theoretical analysis, and also show significant error performance improvement over the uncoupled rate-1/3 turbo codes and existing designs of spatially coupled turbo codes.
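The exact density evolution equations for these ensembles are derived in the paper itself; as a generic illustration of how a belief-propagation threshold is extracted from a DE recursion on the BEC, the sketch below uses the textbook (3,6)-regular LDPC recursion as a stand-in (an assumption for illustration only, not the PIC-dTC equations) and locates the threshold by bisection:

```python
def de_converges(eps, iters=2000, tol=1e-12):
    """Run the BEC density-evolution recursion for a (3,6)-regular
    LDPC ensemble (stand-in example): x <- eps * (1 - (1 - x)**5)**2.
    Returns True if the erasure probability is driven to ~0."""
    x = eps
    for _ in range(iters):
        x = eps * (1.0 - (1.0 - x) ** 5) ** 2
        if x < tol:
            return True
    return False

def bp_threshold(lo=0.0, hi=1.0, steps=40):
    """Bisect for the largest channel erasure probability eps
    for which density evolution converges."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        if de_converges(mid):
            lo = mid
        else:
            hi = mid
    return lo

print(f"(3,6) LDPC BP threshold on the BEC: {bp_threshold():.4f}")  # ~0.4294
```

The same converge/diverge bisection applies to any ensemble once its DE update (here a single scalar recursion; a vector recursion for coupled ensembles) is in hand.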
Spatially coupled serially concatenated codes (SC-SCCs) are a class of spatially coupled turbo-like codes with close-to-capacity performance and a low error floor. In this paper, we investigate the impact of coupling memory, block length, decoding window size, and number of iterations on the performance, complexity, and latency of SC-SCCs. Several design tradeoffs are presented to show the relations between these parameters over a wide range. Our analysis also provides design guidelines for SC-SCCs in different scenarios, making the code design independent of block length. As a result, block length and coupling memory can be exchanged flexibly without changing the latency and complexity. We also observe that, for fixed latency and complexity, the performance of SC-SCCs improves with respect to the uncoupled ensembles.
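The block-length/coupling-memory exchange can be sketched under a hypothetical but common latency model (an assumption, not the paper's exact accounting): a sliding-window decoder spanning w coupled blocks of K information bits each has structural latency of roughly w*K bits, so fixing the latency budget and growing the window with the coupling memory shrinks the per-block length accordingly:

```python
# Hypothetical latency model: window decoder latency ~ w * K bits,
# with the window size w taken proportional to coupling memory m.
# blocks_per_memory is an illustrative design knob, not from the paper.
def block_length_for(latency_bits, coupling_memory, blocks_per_memory=2):
    w = blocks_per_memory * (coupling_memory + 1)  # window size in blocks
    return latency_bits // w                       # info bits per block

LATENCY = 24000  # illustrative latency budget in bits
for m in [1, 2, 3, 5]:
    w = 2 * (m + 1)
    print(f"m = {m}: window = {w} blocks, K = {block_length_for(LATENCY, m)}")
```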
The performance of short polar codes under successive cancellation (SC) and SC list (SCL) decoding is analyzed for the case where the decoder messages are coarsely quantized. This setting is of particular interest for applications requiring low-complexity energy-efficient transceivers (e.g., internet-of-things or wireless sensor networks). We focus on the extreme case where the decoder messages are quantized with 3 levels. We show how under SCL decoding quantized log-likelihood ratios lead to a large inaccuracy in the calculation of path metrics, resulting in considerable performance losses with respect to an unquantized SCL decoder. We then introduce two novel techniques which improve the performance of SCL decoding with coarse quantization. The first technique consists of a modification of the final decision step of SCL decoding, where the selected codeword is the one maximizing the maximum-likelihood decoding metric within the final list. The second technique relies on statistical knowledge about the reliability of the bit estimates, obtained through a suitably modified density evolution analysis, to improve the list construction phase, yielding a higher probability of having the transmitted codeword in the list. The effectiveness of the two techniques is demonstrated through simulations.
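The first technique above, selecting from the final list the codeword that maximizes the maximum-likelihood metric, can be sketched generically. Assuming the standard correlation metric on a binary-input channel with BPSK mapping 0 -> +1 (so the ML metric of codeword c given LLRs L is sum_i (1 - 2*c_i) * L_i), the candidate names and values below are illustrative:

```python
import numpy as np

def ml_select(candidates, llrs):
    """Pick from the decoder's final list the codeword maximizing the
    ML correlation metric sum_i (1 - 2*c_i) * L_i (0 -> +1 BPSK)."""
    metrics = [np.sum((1 - 2 * c) * llrs) for c in candidates]
    return candidates[int(np.argmax(metrics))]

# toy usage: two list entries, channel LLRs favouring the all-zero word
llrs = np.array([2.1, 0.8, 1.7, 0.9])
cand = [np.array([0, 0, 0, 0]), np.array([0, 1, 0, 0])]
best = ml_select(cand, llrs)
print(best)  # the all-zero candidate wins here
```

The point of the technique is that this final selection uses the unquantized (or less coarsely quantized) channel LLRs, sidestepping the path-metric inaccuracy accumulated inside the 3-level-quantized SCL decoder.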
This paper considers a multi-antenna multicast system with a programmable metasurface (PMS) based transmitter. Taking into account the finite-resolution phase shifts of PMSs, a novel beam training approach is proposed, which achieves comparable performance to the exhaustive beam searching method but with much lower time overhead. Then, a closed-form expression for the achievable multicast rate is presented, which is valid for arbitrary system configurations. In addition, for certain asymptotic scenarios, simple approximate expressions for the multicast rate are derived. Closed-form solutions are obtained for the optimal power allocation scheme, and it is shown that equal power allocation is optimal when the pilot power or the number of reflecting elements is sufficiently large. However, it is desirable to allocate more power to weaker users when there are a large number of RF chains. The analytical findings indicate that, with large pilot power, the multicast rate is determined by the weakest user. Also, increasing the number of radio frequency (RF) chains or reflecting elements can significantly improve the multicast rate, and as the phase shift number becomes larger, the multicast rate improves first and gradually converges to a limit. Moreover, increasing the number of users would significantly degrade the multicast rate, but this rate loss can be compensated by implementing a large number of reflecting elements.
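The observation that the multicast rate is determined by the weakest user follows from the common-message constraint: every user must decode the same message, so the rate is capped by the worst channel. A minimal sketch (the per-user SNR values are illustrative, not from the paper):

```python
import math

def multicast_rate(snrs):
    """Common-message multicast rate in bits/s/Hz: every user must
    decode, so the rate is limited by the weakest user's channel."""
    return min(math.log2(1.0 + s) for s in snrs)

snrs = [12.0, 5.0, 30.0]              # illustrative per-user SNRs (linear)
print(f"{multicast_rate(snrs):.3f}")  # limited by the SNR-5.0 user
```

This is also why allocating more power to weaker users helps: it raises the minimum term in the expression above.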
Maya Slamovich, Ram Zamir (2021)
Non-orthogonal multiple access (NOMA) is a leading technology that has gained a lot of interest over the past several years. It enables higher user density and is therefore suited to modern systems such as 5G and IoT. In this paper, we examine different frame-based codes for a partially active NOMA system, a more realistic setting where only part of the users in an overly populated system are active simultaneously. We introduce a new analysis approach where the active user ratio, a system feature, is kept constant and frames of different sizes are employed. The frame types are partially derived from previous papers on the subject [1][2] and partially novel, such as the LPF and the Steiner ETF. We find that the best capacity-achieving frame depends on the active user ratio, and three distinct ranges are defined. In addition, we introduce a measure called practical capacity, which is the maximal rate achieved by a simple coding scheme. ETFs always achieve the best practical capacity, while LPFs and sparse frames are worse than a random frame.
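An equiangular tight frame (ETF) is a unit-norm frame whose worst-case coherence meets the Welch lower bound sqrt((n-k)/(k(n-1))) with equality, which is one reason ETFs perform well here. A hedged sketch comparing a random frame's coherence against that bound (dimensions and the random construction are illustrative, not the paper's frames):

```python
import numpy as np

rng = np.random.default_rng(0)

def coherence(F):
    """Worst-case |inner product| between distinct unit-norm columns."""
    G = np.abs(F.T @ F)
    np.fill_diagonal(G, 0.0)
    return G.max()

k, n = 4, 8                        # illustrative: n vectors in dimension k
F = rng.standard_normal((k, n))
F /= np.linalg.norm(F, axis=0)     # normalize columns to unit norm
welch = np.sqrt((n - k) / (k * (n - 1)))   # lower bound, met by ETFs
print(f"random frame coherence {coherence(F):.3f} >= Welch bound {welch:.3f}")
```

Any unit-norm frame with n > k satisfies coherence >= Welch bound, so the random frame's gap to the bound quantifies how far it is from being an ETF.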
