
Joint Crosstalk-Avoidance and Error-Correction Coding for Parallel Data Buses

Published by Urs Niesen
Publication date: 2016
Research field: Informatics Engineering
Research language: English





Decreasing transistor sizes and lower voltage swings cause two distinct problems for communication in integrated circuits. First, decreasing inter-wire spacing increases interline capacitive coupling, which adversely affects transmission energy and delay. Second, lower voltage swings render the transmission susceptible to various noise sources. Coding can be used to address both these problems. So-called crosstalk-avoidance codes mitigate capacitive coupling, and traditional error-correction codes introduce resilience against channel errors. Unfortunately, crosstalk-avoidance and error-correction codes cannot be combined in a straightforward manner. On the one hand, crosstalk-avoidance encoding followed by error-correction encoding destroys the crosstalk-avoidance property. On the other hand, error-correction encoding followed by crosstalk-avoidance encoding causes the crosstalk-avoidance decoder to fail in the presence of errors. Existing approaches circumvent this difficulty by using additional bus wires to protect the parities generated from the output of the error-correction encoder, and are therefore inefficient. In this work we propose a novel joint crosstalk-avoidance and error-correction coding and decoding scheme that provides higher bus transmission rates compared to existing approaches. Our joint approach carefully embeds the parities such that the crosstalk-avoidance property is preserved. We analyze the rate and minimum distance of the proposed scheme. We also provide a density evolution analysis and predict iterative decoding thresholds for reliable communication under random bus erasures. This density evolution analysis is nonstandard, since the crosstalk-avoidance constraints are inherently nonlinear.
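To make the nonlinear crosstalk-avoidance constraint concrete, the following minimal Python sketch illustrates one standard constraint of this kind, the forbidden-transition rule, under which no two adjacent bus wires may switch in opposite directions between consecutive bus words. This is an illustration only, not the paper's joint construction, and the helper names are hypothetical.

```python
# Illustrative sketch (not the paper's construction): a checker for the
# "forbidden transition" crosstalk-avoidance constraint. Such constraints
# are nonlinear, which is why they interact badly with linear
# error-correction encoding.
from itertools import product


def violates_forbidden_transition(prev_word, next_word):
    """Return True if any adjacent wire pair makes opposing transitions."""
    for i in range(len(prev_word) - 1):
        d0 = next_word[i] - prev_word[i]          # transition on wire i
        d1 = next_word[i + 1] - prev_word[i + 1]  # transition on wire i+1
        if d0 * d1 == -1:                         # opposite-direction switching
            return True
    return False


def admissible_next_words(prev_word):
    """Enumerate bus words that may follow `prev_word` without violating the
    constraint (brute force, for small bus widths only)."""
    n = len(prev_word)
    return [w for w in product((0, 1), repeat=n)
            if not violates_forbidden_transition(prev_word, w)]


if __name__ == "__main__":
    prev = (0, 1, 0, 1)
    nxt = (1, 0, 0, 1)   # wires 0 and 1 switch in opposite directions
    print(violates_forbidden_transition(prev, nxt))   # True
    print(len(admissible_next_words(prev)))           # number of legal successors
```

Because the set of admissible next words depends on the previously transmitted word, the constraint cannot be captured by a linear code, which is what makes the joint design and the density evolution analysis nonstandard.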




Read also

We consider linear network error correction (LNEC) coding when errors may occur on edges of a communication network of which the topology is known. In this paper, we first revisit and explore the framework of LNEC coding, and then unify two well-known LNEC coding approaches. Furthermore, by developing a graph-theoretic approach to the framework of LNEC coding, we obtain a significantly enhanced characterization of the error correction capability of LNEC codes in terms of the minimum distances at the sink nodes. In LNEC coding, the minimum required field size for the existence of LNEC codes, in particular LNEC maximum distance separable (MDS) codes, which form an important class of optimal codes, is an open problem not only of theoretical interest but also of practical importance, because it is closely related to the implementation of the coding scheme in terms of computational complexity and storage requirement. By applying the graph-theoretic approach, we obtain an improved upper bound on the minimum required field size. The improvement over the existing results is in general significant. The improved upper bound, which is graph-theoretic, depends only on the network topology and the required error correction capability, not on a specific code construction. However, this bound is not given in an explicit form. We thus develop an efficient algorithm that can compute the bound in linear time. In developing the upper bound and the efficient algorithm for computing this bound, various graph-theoretic concepts are introduced. These concepts appear to be of fundamental interest in graph theory and may have further applications in graph theory and beyond.
Index coding is a source coding problem in which a broadcaster seeks to meet the different demands of several users, each of whom is assumed to have some prior information on the data held by the sender. If the sender knows its clients' requests and their side-information sets, then the number of packet transmissions required to satisfy all users' demands can be greatly reduced if the data is encoded before sending. The collection of side-information indices as well as the indices of the requested data is described as an instance of the index coding with side-information (ICSI) problem. The encoding function is called the index code of the instance, and the number of transmissions employed by the code is referred to as its length. The main ICSI problem is to determine the optimal length of an index code for an instance. As this number is hard to compute, bounds approximating it are sought, as are algorithms to compute efficient index codes. Two interesting generalizations of the problem that have appeared in the literature are the subject of this work. The first of these is the case of index coding with coded side information, in which linear combinations of the source data are both requested by and held as users' side-information. The second is the introduction of error-correction in the problem, in which the broadcast channel is subject to noise. In this paper we characterize the optimal length of a scalar or vector linear index code with coded side information (ICCSI) over a finite field in terms of a generalized min-rank and give bounds on this number based on constructions of random codes for an arbitrary instance. We furthermore consider the length of an optimal error-correcting code for an instance of the ICCSI problem and obtain bounds on this number, both for the Hamming metric and for rank-metric errors. We describe decoding algorithms for both categories of errors.
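As context for the generalized min-rank mentioned above, the sketch below brute-forces the classical min-rank over GF(2) for a tiny instance with uncoded side information, which is the quantity that characterizes the optimal scalar linear index code length in that special case (Bar-Yossef et al.). It is an illustration only; the function names and the example instance are not from the paper, which generalizes this quantity to coded side information.

```python
# Hedged illustration: brute-force min-rank over GF(2) for the classical ICSI
# setting with uncoded side information. Exponential time; tiny instances only.
from itertools import product


def gf2_rank(rows):
    """Rank over GF(2); each row is an integer bitmask."""
    pivots = {}                      # leading-bit position -> basis row
    for r in rows:
        while r:
            lead = r.bit_length() - 1
            if lead not in pivots:
                pivots[lead] = r
                break
            r ^= pivots[lead]        # cancel the leading bit and keep reducing
    return len(pivots)


def minrank_gf2(side_info):
    """side_info[i] is the set of message indices receiver i already knows.
    Feasible matrices M satisfy M[i][i] = 1 and M[i][j] = 0 for every j not in
    side_info[i] (j != i); the minimum rank over all such M equals the optimal
    scalar linear index code length over GF(2) in this classical setting."""
    n = len(side_info)
    free = [(i, j) for i in range(n) for j in range(n)
            if j != i and j in side_info[i]]
    best = n
    for bits in product((0, 1), repeat=len(free)):
        rows = [1 << i for i in range(n)]          # start with M[i][i] = 1
        for (i, j), b in zip(free, bits):
            if b:
                rows[i] |= 1 << j
        best = min(best, gf2_rank(rows))
    return best


if __name__ == "__main__":
    # Three receivers in a directed cycle of side information:
    # receiver 0 knows message 1, receiver 1 knows message 2, receiver 2 knows message 0.
    print(minrank_gf2([{1}, {2}, {0}]))   # 2: one transmission is saved
```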
This paper proposes a novel deep learning-based error correction coding scheme for AWGN channels under the constraint of one-bit quantization in the receivers. Specifically, it is first shown that the optimum error correction code that minimizes the probability of bit error can be obtained by perfectly training a special autoencoder, where "perfectly" refers to converging to the global minimum. However, perfect training is not possible in most cases. To approach the performance of a perfectly trained autoencoder with suboptimum training, we propose utilizing turbo codes as an implicit regularization, i.e., using a concatenation of a turbo code and an autoencoder. It is empirically shown that this design gives nearly the same performance as the hypothetical perfectly trained autoencoder, and we also provide a theoretical proof of why that is so. The proposed coding method is as bandwidth-efficient as the integrated (outer) turbo code, since the autoencoder exploits the excess bandwidth from pulse shaping and packs signals more intelligently thanks to sparsity in neural networks. Our results show that the proposed coding scheme at finite block lengths outperforms conventional turbo codes even for QPSK modulation. Furthermore, the proposed coding method can make one-bit quantization operational even for 16-QAM.
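The sketch below shows a generic inner autoencoder for an AWGN channel whose receiver applies one-bit quantization, written in PyTorch. The layer sizes, the straight-through gradient estimator for the non-differentiable sign(), and the omission of the outer turbo code are all assumptions made for illustration; this is not the paper's architecture.

```python
# Generic sketch under stated assumptions: an inner autoencoder trained over an
# AWGN channel with one-bit quantization at the receiver. The outer turbo code
# is not shown; its coded bits would be fed in as `bits`.
import torch
import torch.nn as nn


class OneBitAutoencoder(nn.Module):
    def __init__(self, k=8, n=16, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(k, hidden), nn.ReLU(), nn.Linear(hidden, n))
        self.decoder = nn.Sequential(
            nn.Linear(n, hidden), nn.ReLU(), nn.Linear(hidden, k))

    def forward(self, bits, noise_std):
        x = self.encoder(bits)
        x = x / x.pow(2).mean(dim=1, keepdim=True).sqrt()   # per-block power normalization
        y = x + noise_std * torch.randn_like(x)             # AWGN channel
        # One-bit quantization; sign() is non-differentiable, so a
        # straight-through estimator passes gradients through unchanged.
        y_q = y + (torch.sign(y) - y).detach()
        return self.decoder(y_q)                            # logits for each bit


# One training step on random placeholder bit blocks.
model = OneBitAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
bits = torch.randint(0, 2, (256, 8)).float()
loss = nn.BCEWithLogitsLoss()(model(bits, noise_std=0.5), bits)
opt.zero_grad()
loss.backward()
opt.step()
```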
Motivated by mutation processes occurring in in-vivo DNA-storage applications, a channel that mutates stored strings by duplicating substrings as well as substituting symbols is studied. Two models of such a channel are considered: one in which the substitutions occur only within the duplicated substrings, and one in which the location of substitutions is unrestricted. Both error-detecting and error-correcting codes are constructed, which can correctly handle any number of tandem duplications of a fixed length $k$, and at most a single substitution occurring at any time during the mutation process.
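To make the channel model concrete, the following sketch simulates the mutation process just described: repeated tandem duplications of a fixed length k, with at most one substitution occurring at some point during the process. It is an illustration only and does not reproduce the paper's code constructions; the function names are hypothetical.

```python
# Hedged illustration: a simulator for the duplication/substitution channel.
import random


def tandem_duplicate(s, k):
    """Duplicate a random length-k substring in place: ...xyz... -> ...xyzxyz..."""
    i = random.randrange(len(s) - k + 1)
    return s[:i + k] + s[i:i + k] + s[i + k:]


def substitute(s, alphabet="ACGT"):
    """Replace one random symbol with a different symbol from the alphabet."""
    i = random.randrange(len(s))
    return s[:i] + random.choice([a for a in alphabet if a != s[i]]) + s[i + 1:]


def mutate(s, k, num_duplications, with_substitution=True):
    """Apply tandem duplications of length k, with at most one substitution
    inserted at a random point of the mutation process."""
    when = random.randrange(num_duplications + 1) if with_substitution else None
    for step in range(num_duplications):
        if step == when:
            s = substitute(s)
        s = tandem_duplicate(s, k)
    if when == num_duplications:
        s = substitute(s)
    return s


print(mutate("ACGTACG", k=3, num_duplications=2))
```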
We consider the problem of characterizing network capacity in the presence of adversarial errors on network links, focusing in particular on the effect of low-capacity feedback links across network cuts.