
Enhanced Precision Through Multiple Reads for LDPC Decoding in Flash Memories

Posted by Kasra Vakilinia
Publication date: 2013
Research field: Informatics Engineering
Paper language: English





Multiple reads of the same Flash memory cell with distinct word-line voltages provide enhanced precision for LDPC decoding. In this paper, the word-line voltages are optimized by maximizing the mutual information (MI) of the quantized channel. The enhanced precision from a few additional reads allows frame-error-rate (FER) performance to approach that of full-precision soft information and enables an LDPC code to significantly outperform a BCH code. A constant-ratio constraint provides a significant simplification in the optimization with no noticeable loss in performance. For a well-designed LDPC code, the quantization that maximizes the mutual information also minimizes the frame error rate in our simulations. However, for an example LDPC code with a high error floor caused by small absorbing sets, the maximum-mutual-information (MMI) quantization does not provide the lowest frame error rate. The best quantization in this case introduces more erasures than would be optimal for the channel MI in order to mitigate the absorbing sets of the poorly designed code. The paper also identifies a trade-off in LDPC code design when decoding is performed with multiple precision levels; the best code at one level of precision will typically not be the best code at a different level of precision.
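As a concrete illustration of the threshold optimization described above, the following Python sketch maximizes the mutual information of the quantized read channel by brute-force search over candidate word-line voltages. It is a minimal sketch, not the paper's procedure: it assumes an SLC-like cell whose read voltage is Gaussian around one of two nominal levels, equiprobable stored bits, and illustrative values for the means, noise level, and search grid.

```python
# Minimal sketch (assumptions: SLC-like cell, Gaussian read voltages around two
# nominal levels, equiprobable inputs; MU, SIGMA, and the grid are illustrative).
import itertools
import numpy as np
from scipy.stats import norm

MU = (0.0, 1.0)    # assumed nominal cell-voltage means for bit values 0 and 1
SIGMA = 0.35       # assumed read-noise standard deviation

def channel_matrix(thresholds):
    """P(region | stored bit) for the DMC induced by reading at `thresholds`."""
    edges = np.concatenate(([-np.inf], np.sort(thresholds), [np.inf]))
    return np.array([np.diff(norm.cdf(edges, mu, SIGMA)) for mu in MU])

def mutual_information(p_y_given_x):
    """I(X;Y) in bits, assuming a uniform distribution on the stored bit X."""
    p_x = np.full(p_y_given_x.shape[0], 1.0 / p_y_given_x.shape[0])
    p_y = p_x @ p_y_given_x
    ratio = np.where(p_y_given_x > 0, p_y_given_x / p_y, 1.0)
    return float((p_x[:, None] * p_y_given_x * np.log2(ratio)).sum())

def mmi_thresholds(num_reads=3, grid=np.linspace(-0.5, 1.5, 41)):
    """Brute-force MMI search over threshold sets (fine for 2-3 reads)."""
    best = max(itertools.combinations(grid, num_reads),
               key=lambda t: mutual_information(channel_matrix(t)))
    return np.array(best), mutual_information(channel_matrix(best))

if __name__ == "__main__":
    thresholds, mi = mmi_thresholds(num_reads=3)
    print("MMI word-line voltages:", np.round(thresholds, 3), "MI =", round(mi, 4))
```

For two or three reads the exhaustive search is cheap; with more reads, a multi-level-cell model, or the constant-ratio constraint from the abstract, one would restrict or restructure the search accordingly.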




Read also

This paper investigates the application of low-density parity-check (LDPC) codes to Flash memories. Multiple cell reads with distinct word-line voltages provide limited-precision soft information for the LDPC decoder. The values of the word-line voltages (also called reference voltages) are optimized by maximizing the mutual information (MI) between the input and output of the multiple-read channel. Constraining the maximum mutual-information (MMI) quantization to enforce a constant-ratio constraint provides a significant simplification with no noticeable loss in performance. Our simulation results suggest that for a well-designed LDPC code, the quantization that maximizes the mutual information will also minimize the frame error rate. However, care must be taken to design the code to perform well in the quantized channel. An LDPC code designed for a full-precision Gaussian channel may perform poorly in the quantized setting. Our LDPC code designs provide an example where quantization increases the importance of absorbing sets, thus changing how the LDPC code should be optimized. Simulation results show that small increases in precision enable the LDPC code to significantly outperform a BCH code with comparable rate and block length (but without the benefit of the soft information) over a range of frame error rates.
High-capacity NAND flash memories use multi-level cells (MLCs) to store multiple bits per cell and achieve high storage densities. Higher densities cause increased raw bit error rates (BERs), which demand powerful error correcting codes. Low-density parity-check (LDPC) codes are a well-known class of capacity-approaching codes in AWGN channels. However, LDPC codes traditionally use soft information, while the flash read channel provides only hard information. Low-resolution soft information may be obtained by performing multiple reads per cell with distinct word-line voltages. We select the values of these word-line voltages to maximize the mutual information between the input and output of the equivalent multiple-read channel under any specified noise model. Our results show that, for a given number of quantization levels, maximum mutual-information (MMI) quantization provides better soft information for LDPC decoding than the constant-pdf-ratio quantization approach. We also show that adjusting the LDPC code degree distribution for the quantized setting provides a significant performance improvement.
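To make the "limited-precision soft information" concrete, here is a minimal sketch of how a multiple-read outcome can be mapped to a fixed log-likelihood ratio (LLR) per quantization region before being handed to an LDPC decoder. The Gaussian voltage model, the threshold values, and the function names are illustrative assumptions, not values or interfaces from the paper.

```python
# Minimal sketch: one fixed LLR per read region, derived from an assumed
# Gaussian SLC voltage model (MU0, MU1, SIGMA, THRESHOLDS are illustrative).
import numpy as np
from scipy.stats import norm

MU0, MU1, SIGMA = 0.0, 1.0, 0.35        # assumed voltage means / noise level
THRESHOLDS = [0.35, 0.5, 0.65]          # three reads -> four regions (assumed)

def region_llrs(thresholds, mu0=MU0, mu1=MU1, sigma=SIGMA):
    """LLR = log P(region | bit 0) / P(region | bit 1) for every read region."""
    edges = np.concatenate(([-np.inf], np.sort(thresholds), [np.inf]))
    p0 = np.diff(norm.cdf(edges, mu0, sigma))   # region probabilities for bit 0
    p1 = np.diff(norm.cdf(edges, mu1, sigma))   # region probabilities for bit 1
    return np.log(p0 / p1)                      # positive favors bit 0

def quantize_reads(voltages, thresholds=THRESHOLDS):
    """Map raw cell voltages to region indices, then to the region LLRs."""
    llrs = region_llrs(thresholds)
    idx = np.searchsorted(np.sort(thresholds), voltages)
    return llrs[idx]

if __name__ == "__main__":
    print("per-region LLRs:", np.round(region_llrs(THRESHOLDS), 2))
    print("LLRs for example reads:", np.round(quantize_reads([0.1, 0.55, 0.9]), 2))
```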
Eshed Ram, Yuval Cassuto (2018)
This paper presents a theoretical study of a new type of LDPC codes motivated by practical storage applications. LDPCL codes (suffix L represents locality) are LDPC codes that can be decoded either as usual over the full code block, or locally when a smaller sub-block is accessed (to reduce latency). LDPCL codes are designed to maximize the error-correction performance vs. rate in the usual (global) mode, while at the same time providing a certain performance in the local mode. We develop a theoretical framework for the design of LDPCL codes. Our results include a design tool to construct an LDPC code with two data-protection levels: local and global. We derive theoretical results supporting this tool and we show how to achieve capacity with it. A trade-off between the gap to capacity and the number of full-block accesses is studied, and a finite-length analysis of ML decoding is performed to exemplify a trade-off between the locality capability and the full-block error-correcting capability.
Decoding sequences that stem from multiple transmissions of a codeword over an insertion, deletion, and substitution channel is a critical component of efficient deoxyribonucleic acid (DNA) data storage systems. In this paper, we consider a concatenated coding scheme with an outer low-density parity-check code and either an inner convolutional code or a block code. We propose two new decoding algorithms for inference from multiple received sequences, both combining the inner code and channel into a joint hidden Markov model to infer symbolwise a posteriori probabilities (APPs). The first decoder computes the exact APPs by jointly decoding the received sequences, whereas the second decoder approximates the APPs by combining the results of separately decoded received sequences. Using the proposed algorithms, we evaluate the performance of decoding multiple received sequences by means of achievable information rates and Monte-Carlo simulations. We show significant performance gains compared to a single received sequence.
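The second (approximate) decoder mentioned above fuses separately decoded sequences. One common way to combine symbolwise APPs, sketched below under the assumptions of conditionally independent traces and a uniform symbol prior, is to multiply the per-trace posteriors symbol by symbol and renormalize (equivalently, add log-domain beliefs). This is a hedged illustration of the general idea, not the paper's exact combining rule; the alphabet and the example numbers are made up.

```python
# Minimal sketch: fuse per-trace symbolwise APPs by per-symbol multiplication
# and renormalization (assumes independent traces and a uniform symbol prior).
import numpy as np

def combine_apps(apps_per_trace):
    """apps_per_trace: list of (seq_len, alphabet) arrays of per-trace APPs."""
    log_apps = sum(np.log(np.clip(a, 1e-300, None)) for a in apps_per_trace)
    log_apps -= log_apps.max(axis=1, keepdims=True)   # numerical stabilization
    combined = np.exp(log_apps)
    return combined / combined.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    # Two traces, three symbol positions, DNA-like alphabet {A, C, G, T}.
    trace1 = np.array([[0.70, 0.10, 0.10, 0.10],
                       [0.25, 0.25, 0.25, 0.25],
                       [0.10, 0.60, 0.20, 0.10]])
    trace2 = np.array([[0.50, 0.20, 0.20, 0.10],
                       [0.10, 0.10, 0.70, 0.10],
                       [0.20, 0.50, 0.20, 0.10]])
    print(np.round(combine_apps([trace1, trace2]), 3))  # sharper beliefs per symbol
```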
Layered decoding is well appreciated in Low-Density Parity-Check (LDPC) decoder implementation since it can achieve effectively high decoding throughput with low computation complexity. This work, for the first time, addresses low-complexity column-layered decoding schemes and VLSI architectures for multi-Gb/s applications. At first, the Min-Sum algorithm is incorporated into the column-layered decoding. Then algorithmic transformations and judicious approximations are explored to minimize the overall computation complexity. Compared to the original column-layered decoding, the new approach can reduce the computation complexity in check node processing for high-rate LDPC codes by up to 90% while maintaining the fast convergence speed of layered decoding. Furthermore, a relaxed pipelining scheme is presented to enable very high clock speed for VLSI implementation. Equipped with these new techniques, an efficient decoder architecture for quasi-cyclic LDPC codes is developed and implemented with 0.13 µm CMOS technology. It is shown that a decoding throughput of nearly 4 Gb/s at a maximum of 10 iterations can be achieved for a (4096, 3584) LDPC code. Hence, this work has facilitated practical applications of column-layered decoding and particularly made it very attractive in high-speed, high-rate LDPC decoder implementation.
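For readers unfamiliar with the Min-Sum algorithm mentioned above, the following sketch shows the standard check-node message update it is built on: each outgoing check-to-variable message takes the product of the signs and the minimum of the magnitudes of the other incoming messages, implemented with the usual two-minimum trick. This illustrates only the message rule, not the column-layered schedule, the approximations, or the VLSI architecture described in the paper.

```python
# Minimal sketch of the standard Min-Sum check-node update (leave-one-out sign
# product and minimum magnitude), computed with the two-minimum trick.
import numpy as np

def min_sum_check_node(incoming_llrs):
    """incoming_llrs: 1-D array of variable-to-check LLRs at one check node."""
    llrs = np.asarray(incoming_llrs, dtype=float)
    signs = np.sign(llrs)
    signs[signs == 0] = 1.0
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]   # two smallest magnitudes
    out = np.full_like(mags, min1)
    out[order[0]] = min2                          # exclude each edge's own message
    return total_sign * signs * out               # leave-one-out sign per edge

if __name__ == "__main__":
    print(np.round(min_sum_check_node([1.2, -0.4, 2.5, -3.0]), 2))
```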