
Approaching the Capacity of Large-Scale MIMO Systems via Non-Binary LDPC Codes

Publication date: 2012
Language: English





In this paper, the application of non-binary low-density parity-check (NB-LDPC) codes to MIMO systems that employ hundreds of antennas at both the transmitter and the receiver is proposed. Combined with the well-known low-complexity MMSE detection, moderate-length NB-LDPC codes can operate close to the MIMO capacity, with a capacity gap of about 3.5 dB (the best previously reported gap is more than 7 dB). To further reduce the complexity of MMSE detection, a novel soft-output detector is also proposed that delivers excellent coded performance in the low-SNR region with a 99% complexity reduction. The asymptotic performance is analysed using Monte Carlo density evolution, which shows that NB-LDPC codes can operate within 1.6 dB of the MIMO capacity. Finally, the merits of NB-LDPC codes in large MIMO systems with imperfect channel estimation and spatial fading correlation, both realistic impairments for large MIMO systems, are also highlighted.
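For a rough picture of the detection front end referred to above, the following Python sketch applies a linear MMSE filter to a large MIMO observation and returns per-stream soft statistics that a demapper could turn into LLRs for the NB-LDPC decoder. The 128x128 Rayleigh-fading setup, unit-energy QPSK mapping, and noise level are assumptions chosen for illustration, not parameters taken from the paper, and the sketch omits the paper's proposed low-complexity detector entirely.

```python
import numpy as np

def mmse_soft_detect(H, y, es, n0):
    """Linear MMSE detection for y = H x + n, with E[|x_k|^2] = es and E[|n_k|^2] = n0.

    Returns per-stream estimates, the effective channel gains mu_k, and the
    residual interference-plus-noise variances a soft demapper would use for LLRs.
    """
    nt = H.shape[1]
    G = H.conj().T @ H + (n0 / es) * np.eye(nt)   # regularised Gram matrix
    W = np.linalg.solve(G, H.conj().T)            # MMSE filter W = G^{-1} H^H
    x_hat = W @ y
    mu = np.real(np.diag(W @ H))                  # effective gain of each stream
    sigma2 = es * mu * (1.0 - mu)                 # standard MMSE residual variance
    return x_hat, mu, sigma2

# Toy 128x128 Rayleigh-fading link with unit-energy QPSK (all values assumed).
rng = np.random.default_rng(0)
nt = nr = 128
H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
bits = rng.integers(0, 2, size=(nt, 2))
x = ((1 - 2 * bits[:, 0]) + 1j * (1 - 2 * bits[:, 1])) / np.sqrt(2)
n0 = 0.1
noise = np.sqrt(n0 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + noise
x_hat, mu, sigma2 = mmse_soft_detect(H, y, es=1.0, n0=n0)
```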



Related research

Igal Sason, 2015
This paper is focused on the derivation of some universal properties of capacity-approaching low-density parity-check (LDPC) code ensembles whose transmission takes place over memoryless binary-input output-symmetric (MBIOS) channels. Properties of the degree distributions, graphical complexity and the number of fundamental cycles in the bipartite graphs are considered via the derivation of information-theoretic bounds. These bounds are expressed in terms of the target block/bit error probability and the gap (in rate) to capacity. Most of the bounds are general for any decoding algorithm, and some others are proved under belief-propagation (BP) decoding. Proving these bounds under a certain decoding algorithm automatically validates them under any sub-optimal decoding algorithm. A proper modification of these bounds makes them universal for the set of all MBIOS channels which exhibit a given capacity. Bounds on the degree distributions and graphical complexity apply to finite-length LDPC codes and to the asymptotic case of an infinite block length. The bounds are compared with capacity-approaching LDPC code ensembles under BP decoding, and they are shown to be informative and easy to calculate. Finally, some interesting open problems are considered.
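As a small aside on the "gap (in rate) to capacity" in which these bounds are expressed, the sketch below estimates the capacity of one MBIOS channel, the binary-input AWGN channel, by Monte Carlo and reports the multiplicative rate gap 1 - R/C for an assumed design rate; the SNR, sample size, and rate are illustrative choices, not values from the paper.

```python
import numpy as np

def biawgn_capacity(snr_db, n_samples=1_000_000, seed=0):
    """Monte Carlo estimate of the binary-input AWGN channel capacity (bits/use).

    Uses the MBIOS identity C = 1 - E[log2(1 + exp(-L))], where L is the channel
    LLR given that the +1 symbol was sent: L ~ N(2/sigma^2, 4/sigma^2).
    """
    rng = np.random.default_rng(seed)
    sigma2 = 10.0 ** (-snr_db / 10.0)            # noise variance for SNR = 1/sigma^2 (Es = 1)
    llr = rng.normal(2.0 / sigma2, 2.0 / np.sqrt(sigma2), n_samples)
    return 1.0 - np.mean(np.logaddexp(0.0, -llr)) / np.log(2.0)

# Multiplicative rate gap to capacity for a hypothetical rate-1/2 ensemble at 0.5 dB.
design_rate = 0.5
cap = biawgn_capacity(0.5)
print(f"C ~= {cap:.4f} bits/use, gap = 1 - R/C ~= {1.0 - design_rate / cap:.4f}")
```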
Non-binary low-density parity-check codes are robust to various channel impairments. However, with the existing decoding algorithms, decoder implementations are expensive because of their excessive computational complexity and memory usage. Based on combinatorial optimization, we present an approximation method for the check-node processing. The simulation results demonstrate that our scheme incurs only a small performance loss over the additive white Gaussian noise channel and the independent Rayleigh fading channel. Furthermore, the proposed reduced-complexity realization provides significant savings in hardware, so it yields a good performance-complexity tradeoff and can be efficiently implemented.
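To make the check-node bottleneck concrete, the sketch below implements the standard exact probability-domain check-node update for codes over GF(2^m): the extrinsic message is the XOR-convolution of the incoming symbol PMFs, computed here with a fast Walsh-Hadamard transform (edge-weight permutations are omitted for brevity). This is the operation that reduced-complexity schemes such as the one described above approximate; it is offered only as a baseline illustration, not as the paper's method.

```python
import numpy as np

def wht(v):
    """Unnormalised fast Walsh-Hadamard transform of a length-2^m vector."""
    v = np.asarray(v, dtype=float).copy()
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            a, b = v[i:i + h].copy(), v[i + h:i + 2 * h].copy()
            v[i:i + h], v[i + h:i + 2 * h] = a + b, a - b
        h *= 2
    return v

def check_node_gf2m(incoming_pmfs):
    """Exact extrinsic check-node message over GF(2^m) (edge weights omitted).

    The outgoing PMF is the XOR-convolution of the incoming symbol PMFs, which
    becomes an elementwise product in the Walsh-Hadamard domain.
    """
    q = len(incoming_pmfs[0])
    spec = np.ones(q)
    for p in incoming_pmfs:
        spec *= wht(p)
    out = wht(spec) / q              # inverse WHT equals forward WHT divided by q
    out = np.clip(out, 0.0, None)    # guard against tiny negative round-off
    return out / out.sum()

# Example over GF(8): message to the third edge of a degree-3 check node.
p1 = np.array([0.6, 0.1, 0.1, 0.05, 0.05, 0.05, 0.025, 0.025])
p2 = np.array([0.5, 0.2, 0.1, 0.05, 0.05, 0.05, 0.025, 0.025])
print(check_node_gf2m([p1, p2]))
```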
Motivated by recently derived fundamental limits on total (transmit + decoding) power for coded communication with VLSI decoders, this paper investigates the scaling behavior of the minimum total power needed to communicate over AWGN channels as the target bit-error-probability tends to zero. We focus on regular-LDPC codes and iterative message-passing decoders. We analyze scaling behavior under two VLSI complexity models of decoding. One model abstracts power consumed in processing elements (node model), and another abstracts power consumed in wires which connect the processing elements (wire model). We prove that a coding strategy using regular-LDPC codes with Gallager-B decoding achieves order-optimal scaling of total power under the node model. However, we also prove that regular-LDPC codes and iterative message-passing decoders cannot meet existing fundamental limits on total power under the wire model. Further, if the transmit energy-per-bit is bounded, total power grows at a rate that is worse than uncoded transmission. Complementing our theoretical results, we develop detailed physical models of decoding implementations using post-layout circuit simulations. Our theoretical and numerical results show that approaching fundamental limits on total power requires increasing the complexity of both the code design and the corresponding decoding algorithm as communication distance is increased or error-probability is lowered.
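For reference, the Gallager-B decoding named above can be sketched in a few lines: messages are single bits, check nodes take XORs of extrinsic inputs, and a variable node forwards the complement of its channel bit once at least b extrinsic check messages disagree with it. The threshold default and data layout below are illustrative assumptions, not the implementation analysed in the paper.

```python
import numpy as np

def gallager_b(H, y, b=2, max_iters=50):
    """Gallager-B hard-decision message-passing decoding of a binary LDPC code.

    H: (m, n) binary parity-check matrix, y: received hard decisions (0/1),
    b: variable-node flip threshold. Returns (estimate, converged_flag).
    """
    m, n = H.shape
    chk = [np.flatnonzero(H[i]) for i in range(m)]     # variables in each check
    var = [np.flatnonzero(H[:, j]) for j in range(n)]  # checks touching each variable
    v2c = {(i, j): int(y[j]) for i in range(m) for j in chk[i]}
    x = np.array(y, dtype=int)
    for _ in range(max_iters):
        # Check-to-variable messages: XOR of the other incoming bit messages.
        c2v = {}
        for i in range(m):
            total = 0
            for j in chk[i]:
                total ^= v2c[(i, j)]
            for j in chk[i]:
                c2v[(i, j)] = total ^ v2c[(i, j)]
        # Tentative decision: majority vote of the channel bit and all check messages.
        x = np.array([1 - y[j] if sum(c2v[(i, j)] != y[j] for i in var[j]) > len(var[j]) / 2
                      else y[j] for j in range(n)], dtype=int)
        if not np.any((H @ x) % 2):
            return x, True
        # Variable-to-check: flip the channel bit if at least b extrinsic checks disagree.
        for j in range(n):
            for i in var[j]:
                disagree = sum(c2v[(k, j)] != y[j] for k in var[j] if k != i)
                v2c[(i, j)] = 1 - y[j] if disagree >= b else int(y[j])
    return x, False
```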
Spatially coupled codes have been shown to universally achieve the capacity for a large class of channels. Many variants of such codes have been introduced to date. We discuss a further such variant that is particularly simple and is determined by a very small number of parameters. More precisely, we consider time-invariant low-density convolutional codes with very large constraint lengths. We show via simulations that, despite their extreme simplicity, such codes still show the threshold saturation behavior known from the spatially coupled codes discussed in the literature. Further, we show how the size of the typical minimum stopping set is related to basic parameters of the code. Due to their simplicity and good performance, these codes might be attractive from an implementation perspective.
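One way to picture such time-invariant LDPC convolutional codes is through their band-diagonal parity-check matrix, obtained by repeating a fixed set of component matrices H_0, ..., H_ms along the diagonal. The sketch below assembles a terminated matrix of this form; the random component matrices and the small memory are assumptions for illustration, not the construction studied above.

```python
import numpy as np

def conv_ldpc_parity_matrix(components, L):
    """Terminated parity-check matrix of a time-invariant LDPC convolutional code.

    components: list [H_0, ..., H_ms] of (c - b) x c binary arrays; L: number of
    transmitted column blocks. Column block t is checked by H_s in block row t + s,
    which produces the familiar band-diagonal structure.
    """
    ms = len(components) - 1
    rows, cols = components[0].shape
    H = np.zeros(((L + ms) * rows, L * cols), dtype=int)
    for t in range(L):
        for s, Hs in enumerate(components):
            r = (t + s) * rows
            H[r:r + rows, t * cols:(t + 1) * cols] = Hs
    return H

# Toy example: memory ms = 2 and random rate-1/2 component matrices (illustrative only).
rng = np.random.default_rng(1)
comps = [rng.integers(0, 2, size=(1, 2)) for _ in range(3)]
print(conv_ldpc_parity_matrix(comps, L=6))
```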
In this letter, we explore the performance limits of short polar codes and find that the maximum-likelihood (ML) performance of a simple CRC-polar concatenated scheme can approach the finite-blocklength capacity. Then, in order to approach the ML performance with a low average complexity, a CRC-aided hybrid decoding (CA-HD) algorithm is proposed; its decoding process is divided into two steps. In the first step, the received sequence is decoded by adaptive successive cancellation list (ADSCL) decoding. In the second step, CRC-aided sphere decoding with a reasonable initial radius is used to decode the received sequence. To obtain this radius, the CRC bits of the survival paths in ADSCL are recalculated and the minimum Euclidean distance between a survival path and the received sequence is chosen as the initial radius. The simulation results show that CA-HD can achieve within about $0.025$ dB of the finite-blocklength capacity at a block error ratio of $10^{-3}$ with code length $128$ and code rate $1/2$.
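The radius-initialisation step can be pictured roughly as follows: re-append a freshly computed CRC to each survival path, re-encode it, and take the smallest Euclidean distance to the received sequence as the starting radius for sphere decoding. The CRC-8 polynomial, BPSK mapping, and the user-supplied `encode` callback in the sketch are assumptions; the ADSCL list decoder and the sphere decoder themselves are not shown.

```python
import numpy as np

def crc_bits(info_bits, poly=0b100000111):
    """CRC remainder of a 0/1 sequence (CRC-8 polynomial assumed for the example)."""
    deg = poly.bit_length() - 1
    reg = list(info_bits) + [0] * deg
    for i in range(len(info_bits)):
        if reg[i]:
            for k in range(deg + 1):
                reg[i + k] ^= (poly >> (deg - k)) & 1
    return reg[-deg:]

def initial_radius(survival_paths, y, encode):
    """Initial sphere-decoding radius from the survival paths of a list decoder.

    survival_paths: iterable of information-bit sequences; y: received real vector;
    encode: user-supplied encoder mapping info + recomputed CRC bits to a codeword
    (e.g. a polar encoder), assumed here rather than implemented.
    """
    best = np.inf
    for info in survival_paths:
        codeword = np.asarray(encode(list(info) + crc_bits(info)))
        s = 1.0 - 2.0 * codeword                 # BPSK mapping: 0 -> +1, 1 -> -1
        best = min(best, float(np.linalg.norm(np.asarray(y) - s)))
    return best
```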
