
Low Complexity Two-Stage Soft/Hard Decoders

Published by Farbod Kayhan
Publication date: 2017
Research field: Informatics Engineering
Paper language: English


Next-generation wireless systems will need higher spectral efficiency, as the expected traffic volume per unit bandwidth and dimension will inevitably grow. As a consequence, it is necessary to design coding schemes that perform close to the theoretical limits while offering high flexibility and low complexity at both transmitter and receiver. In this paper, we point out some of the limitations of the Bit-Interleaved Coded Modulation (BICM) technique, the state of the art adopted in several standards, and then propose some new lower-complexity alternatives. These alternatives are obtained by applying the recently introduced Analog Digital Belief Propagation (ADBP) algorithm to a two-stage encoding scheme that embeds a hard-decoding stage. First, we show that for PAM$^2$-type constellations over the AWGN channel, the performance loss caused by hard-decoding all modulation bits except the two least protected ones is negligible. Next, we consider the application of two-stage decoders to the more challenging Rician channels, showing that in this case the number of bits that need to be soft-decoded depends on the Rician factor and increases to a maximum of three bits per dimension for the Rayleigh channel. Finally, we apply the ADBP algorithm to further reduce the detection and decoding complexity.
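As a rough illustration of the soft/hard split (not the paper's ADBP receiver), the sketch below computes max-log per-bit metrics for a Gray-labelled 8-PAM over AWGN and then hard-decides every label bit except the n_soft least protected ones. The labelling, noise model, and saturation value are assumptions made for the example.

import numpy as np

def pam_bit_metrics(y, snr_db, n_soft=2):
    """Per-bit metrics for Gray-labelled 8-PAM observations y (1-D array)."""
    gray = [0, 1, 3, 2, 6, 7, 5, 4]                               # Gray sequence
    labels = np.array([[int(b) for b in format(g, '03b')] for g in gray])
    levels = np.arange(-7.0, 8.0, 2.0)                            # 8 equispaced levels
    levels /= np.sqrt(np.mean(levels ** 2))                       # unit average energy
    sigma2 = 10.0 ** (-snr_db / 10.0)

    d2 = (np.asarray(y)[:, None] - levels[None, :]) ** 2 / sigma2 # (N, 8) distances

    llr = np.zeros((len(y), 3))
    for b in range(3):                                            # max-log per-bit LLRs
        m0 = d2[:, labels[:, b] == 0].min(axis=1)
        m1 = d2[:, labels[:, b] == 1].min(axis=1)
        llr[:, b] = m1 - m0

    # Two-stage split: keep soft LLRs only for the n_soft least protected label
    # positions (assumed here to be the trailing ones); hard-decide the rest by
    # saturating their LLRs, emulating a hard-decoded stage.
    n_hard = 3 - n_soft
    llr[:, :n_hard] = np.where(llr[:, :n_hard] >= 0, 50.0, -50.0)
    return llr

# Example: noisy observations of a few random symbols at 12 dB SNR
rng = np.random.default_rng(1)
tx = rng.choice(np.arange(-7.0, 8.0, 2.0) / np.sqrt(21.0), size=4)
print(pam_bit_metrics(tx + 0.1 * rng.normal(size=4), snr_db=12))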


Read also

129 - Robert Graczyk, Igal Sason 2021
Stationary memoryless sources produce two correlated random sequences $X^n$ and $Y^n$. A guesser seeks to recover $X^n$ in two stages, by first guessing $Y^n$ and then $X^n$. The contributions of this work are twofold: (1) We characterize the least achievable exponential growth rate (in $n$) of any positive $\rho$-th moment of the total number of guesses when $Y^n$ is obtained by applying a deterministic function $f$ component-wise to $X^n$. We prove that, depending on $f$, the least exponential growth rate in the two-stage setup is lower than when guessing $X^n$ directly. We further propose a simple Huffman code-based construction of a function $f$ that is a viable candidate for the minimization of the least exponential growth rate in the two-stage guessing setup. (2) We characterize the least achievable exponential growth rate of the $\rho$-th moment of the total number of guesses required to recover $X^n$ when Stage 1 need not end with a correct guess of $Y^n$ and without assumptions on the stationary memoryless sources producing $X^n$ and $Y^n$.
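To make the two-stage guessing setup concrete, the single-letter (n = 1) sketch below computes the rho-th guessing moment of direct guessing and of guessing Y = f(X) first and then X within f^{-1}(Y), with both stages using decreasing-probability guessing orders. The example source and the choice of f are illustrative assumptions, not the paper's Huffman-based construction.

def rho_moment_direct(p_x, rho):
    """E[G(X)^rho] when guessing X directly in decreasing-probability order."""
    order = sorted(p_x.values(), reverse=True)
    return sum(p * (i + 1) ** rho for i, p in enumerate(order))

def rho_moment_two_stage(p_x, f, rho):
    """E[(G(Y) + G(X|Y))^rho] for a two-stage guesser with Y = f(X)."""
    # Marginal of Y induced by f
    p_y = {}
    for x, p in p_x.items():
        p_y[f(x)] = p_y.get(f(x), 0.0) + p
    y_rank = {y: i + 1 for i, (y, _) in
              enumerate(sorted(p_y.items(), key=lambda t: -t[1]))}
    total = 0.0
    for y in p_y:
        xs = sorted((x for x in p_x if f(x) == y), key=lambda x: -p_x[x])
        for j, x in enumerate(xs):
            # Stage-1 guesses for y plus stage-2 guesses for x within f^{-1}(y)
            total += p_x[x] * (y_rank[y] + j + 1) ** rho
    return total

# Example: 4-ary source and a hypothetical two-valued f
p_x = {'a': 0.5, 'b': 0.25, 'c': 0.15, 'd': 0.10}
f = lambda x: 0 if x in ('a', 'b') else 1
print(rho_moment_direct(p_x, rho=1.0))
print(rho_moment_two_stage(p_x, f, rho=1.0))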
The performance of millimeter wave (mmWave) multiple-input multiple-output (MIMO) systems is limited by the sparse nature of propagation channels and the restricted number of radio frequency (RF) chains at transceivers. The introduction of reconfigur able antennas offers an additional degree of freedom on designing mmWave MIMO systems. This paper provides a theoretical framework for studying the mmWave MIMO with reconfigurable antennas. Based on the virtual channel model, we present an architecture of reconfigurable mmWave MIMO with beamspace hybrid analog-digital beamformers and reconfigurable antennas at both the transmitter and the receiver. We show that employing reconfigurable antennas can provide throughput gain for the mmWave MIMO. We derive the expression for the average throughput gain of using reconfigurable antennas in the system, and further derive the expression for the outage throughput gain for the scenarios where the channels are (quasi) static. Moreover, we propose a low-complexity algorithm for reconfiguration state selection and beam selection. Our numerical results verify the derived expressions for the throughput gains and demonstrate the near-optimal throughput performance of the proposed low-complexity algorithm.
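The sketch below shows one way a magnitude-based selection rule of this kind could look: for each candidate reconfiguration state, pick the strongest beam pairs of the beamspace channel and keep the state with the largest captured power. The channel model, dimensions, and the captured-power metric are assumptions for illustration, not the algorithm proposed in the paper.

import numpy as np

def select_state_and_beams(H_states, n_rf):
    """H_states: list of beamspace channel matrices, one per reconfiguration state."""
    best = (-np.inf, None, None)
    for s, H in enumerate(H_states):
        power = np.abs(H) ** 2                        # per beam-pair power
        flat = np.argsort(power, axis=None)[::-1][:n_rf]
        idx = np.unravel_index(flat, H.shape)         # strongest (rx, tx) beam pairs
        captured = power[idx].sum()
        if captured > best[0]:
            best = (captured, s, list(zip(*idx)))
    return best[1], best[2]                           # chosen state, chosen beam pairs

# Example with 3 random reconfiguration states of a 16x16 beamspace channel
rng = np.random.default_rng(0)
H_states = [rng.normal(size=(16, 16)) + 1j * rng.normal(size=(16, 16)) for _ in range(3)]
print(select_state_and_beams(H_states, n_rf=4))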
127 - Farbod Kayhan 2017
Despite the known gap to the Shannon capacity, several standards still employ QAM or star-shaped constellations, mainly due to the existence of low-complexity detectors. In this paper, we investigate low-complexity detection for a family of QAM-isomorphic constellations. These constellations are known to perform very close to the peak-power-limited capacity, outperforming the DVB-S2X standard constellations. The proposed strategy is to first remap the received signals to the QAM constellation using the existing isomorphism and then break the log-likelihood ratio computations into those of two one-dimensional PAM constellations. Gains larger than 0.6 dB with respect to QAM can be obtained over peak-power-limited channels without any increase in detection complexity. Our scheme also provides a systematic way to design constellations with low-complexity one-dimensional detectors. Several open problems are discussed at the end of the paper.
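The sketch below illustrates the overall flow under simplifying assumptions: the isomorphism is abstracted as a user-supplied remap function (the identity in the usage example), and the LLRs of a square 16-QAM point are then obtained from two independent Gray-labelled 4-PAM detections, one per dimension. It is an illustration of the idea, not the paper's exact detector.

import numpy as np

def pam_llrs(z, levels, labels, sigma2):
    """Max-log LLRs of the label bits of a 1-D PAM observation z."""
    d2 = (z - levels) ** 2 / sigma2
    return np.array([d2[labels[:, b] == 1].min() - d2[labels[:, b] == 0].min()
                     for b in range(labels.shape[1])])

def qam_isomorphic_llrs(r, remap, sigma2):
    """Remap r onto the square QAM grid, then detect each dimension as 4-PAM."""
    z = remap(r)                                           # isomorphism to square 16-QAM
    levels = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5.0)   # unit-energy 4-PAM
    labels = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])        # Gray-labelled 4-PAM
    return np.concatenate([pam_llrs(z.real, levels, labels, sigma2),
                           pam_llrs(z.imag, levels, labels, sigma2)])

# Example usage with the identity map standing in for the true isomorphism
print(qam_isomorphic_llrs(0.3 - 0.8j, remap=lambda r: r, sigma2=0.1))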
77 - S. Li, A. Mirani, M. Karlsson 2021
Voronoi constellations (VCs) are finite sets of vectors of a coding lattice enclosed by the translated Voronoi region of a shaping lattice, which is a sublattice of the coding lattice. In conventional VCs, the shaping lattice is a scaled-up version of the coding lattice. In this paper, we design low-complexity VCs with a cubic coding lattice of up to 32 dimensions, in which pseudo-Gray labeling is applied to minimize the bit error rate. The designed VCs have considerable shaping gains of up to 1.03 dB and finer choices of spectral efficiencies in practice. A mutual information estimation method and a log-likelihood approximation method based on importance sampling for very large constellations are proposed and applied to the designed VCs. With error-control coding, the proposed VCs can have higher achievable information rates than the conventional scaled VCs because of their inherently good pseudo-Gray labeling feature, with a lower decoding complexity.
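A minimal sketch of the basic Voronoi-constellation mapping with a cubic coding lattice Z^n: an integer data vector is reduced modulo the shaping lattice by subtracting its nearest shaping-lattice point, so the result lies inside the shaping Voronoi region. The scaled shaping lattice M*Z^n used here is the conventional choice mentioned above, kept purely for illustration; the paper's 32-dimensional designs, pseudo-Gray labeling, and estimation methods are not reproduced.

import numpy as np

def voronoi_encode(u, M, offset=0.5):
    """Map an integer vector u in Z^n to a point of Z^n inside the Voronoi region of M*Z^n."""
    u = np.asarray(u, dtype=float) + offset          # half-integer dither avoids boundary ties
    q = M * np.round(u / M)                          # nearest shaping-lattice point
    return u - q - offset                            # back to a coding-lattice point

print(voronoi_encode([7, -5, 12, 0], M=4))           # each coordinate ends up in [-2, 2)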
We introduce a two-stage decimation process to improve the performance of neural belief propagation (NBP), recently introduced by Nachmani et al., for short low-density parity-check (LDPC) codes. In the first stage, we build a list by iterating between a conventional NBP decoder and guessing the least reliable bit. The second stage iterates between a conventional NBP decoder and learned decimation, where we use a neural network to decide the decimation value for each bit. For a (128,64) LDPC code, the proposed NBP with decimation outperforms NBP decoding by 0.75 dB and performs within 1 dB from maximum-likelihood decoding at a block error rate of $10^{-4}$.
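A hedged sketch of the Stage-1 list construction: run a soft decoder, locate the least reliable bit (smallest output |LLR|), and branch by clamping that bit both ways before decoding again. The NBP decoder itself and the learned decimation of Stage 2 are abstracted away, and the decode interface assumed here (returning hard decisions and output LLRs) is an assumption for the example.

import numpy as np

LARGE = 50.0  # saturation value used to "decimate" (fix) a bit

def stage1_list(llr_ch, decode, depth=2):
    """Return candidate codewords by recursively guessing the least reliable bits."""
    candidates = []

    def expand(llr, level):
        hard, out_llr = decode(llr)                  # assumed interface: (hard bits, output LLRs)
        candidates.append(hard)
        if level == 0:
            return
        i = int(np.argmin(np.abs(out_llr)))          # least reliable position
        for guess in (+LARGE, -LARGE):               # clamp bit i both ways and re-decode
            branched = np.array(llr, dtype=float)
            branched[i] = guess
            expand(branched, level - 1)

    expand(np.asarray(llr_ch, dtype=float), depth)
    return candidates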