
Deepcode and Modulo-SK are Designed for Different Settings

Published by: Hyeji Kim
Publication date: 2020
Research field: Information engineering
Language: English





We respond to [1], which claimed that the Modulo-SK scheme outperforms Deepcode [2]. We demonstrate that this statement is not true: the two schemes are designed and evaluated for entirely different settings. Deepcode is designed and evaluated for the AWGN channel with (potentially delayed) uncoded output feedback, whereas Modulo-SK is evaluated on the AWGN channel with coded feedback and unit delay. [1] also claimed an implementation of the Schalkwijk-Kailath (SK) scheme [3] that is numerically stable for any number of information bits and iterations. However, we observe that while their implementation does marginally improve over ours, it still suffers from a fundamental issue with precision. Finally, we show that Deepcode dominates the optimized performance of SK over a natural choice of parameterizations when the feedback is noisy.
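To make the precision point concrete, the following is a minimal sketch (not the authors' code) of the classical SK scheme over an AWGN channel with noiseless feedback; the parameters K, rounds, and snr_db are illustrative choices. The message is one of 2^K equally spaced PAM points in [-1, 1], so once K approaches the ~52-bit mantissa of double precision, adjacent message points, and eventually the refinement increments, can no longer be resolved in float64, regardless of implementation details.

```python
import numpy as np

def sk_noiseless_feedback(K=30, rounds=10, snr_db=20.0, seed=0):
    """Sketch of the Schalkwijk-Kailath scheme with noiseless feedback.
    Illustrative parameters only; not the implementation discussed in [1] or [3]."""
    rng = np.random.default_rng(seed)
    snr = 10.0 ** (snr_db / 10.0)
    msg = int(rng.integers(0, 2 ** K))
    spacing = 2.0 / 2 ** K                    # gap between adjacent PAM points
    theta = -1.0 + (msg + 0.5) * spacing      # message point in (-1, 1)

    # Round 1: transmit the (power-scaled) message point itself.
    gain = np.sqrt(3.0 * snr)                 # uniform messages have variance ~1/3
    y = gain * theta + rng.standard_normal()
    theta_hat = y / gain                      # receiver's initial estimate
    var = 1.0 / gain ** 2                     # its error variance

    # Later rounds: perfect feedback reveals theta_hat to the encoder, which
    # transmits the scaled estimation error; the receiver applies an MMSE update.
    for _ in range(rounds - 1):
        eps = theta_hat - theta               # known exactly at the encoder
        y = np.sqrt(snr / var) * eps + rng.standard_normal()
        theta_hat -= np.sqrt(snr * var) / (snr + 1.0) * y
        var /= snr + 1.0                      # error variance shrinks geometrically

    decoded = int(round((theta_hat + 1.0) / spacing - 0.5))
    return msg, decoded, np.sqrt(var)

if __name__ == "__main__":
    msg, decoded, std = sk_noiseless_feedback()
    print(f"sent {msg}, decoded {decoded}, final error std {std:.1e}")
    # For K beyond ~52, adjacent message points already coincide in float64:
    # the scheme hits a precision wall no matter how it is implemented.
```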




Read also

Lattice coding and decoding have been shown to achieve the capacity of the additive white Gaussian noise (AWGN) channel. This was accomplished using a minimum mean-square error scaling and randomization to transform the AWGN channel into a modulo-lattice additive noise channel of the same capacity. It has been further shown that when operating at rates below capacity but above the critical rate of the channel, there exists a rate-dependent scaling such that the associated modulo-lattice channel attains the error exponent of the AWGN channel. A geometric explanation for this result is developed. In particular, it is shown how the geometry of typical error events for the modulo-lattice channel coincides with that of a spherical code for the AWGN channel. (A scalar numerical sketch of the modulo-lattice transformation appears after this list.)
Uri Erez, Ram Zamir (2008)
A simple lemma is derived that allows a general scalar (non-Gaussian, non-additive) continuous-alphabet channel, as well as a general multiple-access channel, to be transformed into a modulo-additive noise channel. While the transformation is in general information-lossy, it makes it possible to carry over linear coding techniques and capacity results, derived for networks comprised of additive Gaussian nodes, to more general networks.
Yuval Kochman, Ram Zamir (2008)
The combination of source coding with decoder side-information (the Wyner-Ziv problem) and channel coding with encoder side-information (the Gelfand-Pinsker problem) can be optimally solved using the separation principle. In this work we show an alternative scheme for the quadratic-Gaussian case, which merges source and channel coding. This scheme achieves the optimal performance by applying modulo-lattice modulation to the analog source. It thus saves the complexity of quantization and channel decoding, leaving only the task of shaping. Furthermore, for high signal-to-noise ratio (SNR), the scheme approaches the optimal performance using an SNR-independent encoder, making it robust to an unknown SNR at the encoder.
In this paper, the problem of securely computing a function over the binary modulo-2 adder multiple-access wiretap channel is considered. The problem involves a legitimate receiver that wishes to reliably and efficiently compute a function of distributed binary sources while an eavesdropper has to be kept ignorant of them. In order to characterize the corresponding fundamental limit, the notion of secrecy computation-capacity is introduced. Although determining the secrecy computation-capacity is challenging for arbitrary functions, it surprisingly turns out that if the function perfectly matches the algebraic structure of the channel and the joint source distribution fulfills certain conditions, the secrecy computation-capacity equals the computation capacity, which is the supremum of all achievable computation rates without secrecy constraints. Unlike the case of securely transmitting messages, no additional randomness is needed at the encoders, nor does the legitimate receiver need any advantage over the eavesdropper. The results therefore show that the problem of securely computing a function over a multiple-access wiretap channel may differ significantly from that of securely communicating messages.
The sparsity and compressibility of finite-dimensional signals are of great interest in fields such as compressed sensing. The notion of compressibility is also extended to infinite sequences of i.i.d. or ergodic random variables based on the observed error in their nonlinear k-term approximation. In this work, we use the entropy measure to study the compressibility of continuous-domain innovation processes (alternatively known as white noise). Specifically, we define such a measure as the entropy limit of the doubly quantized (in time and amplitude) process. This provides a tool to compare the compressibility of various innovation processes. It also allows us to identify an analogue of the concept of entropy dimension which was originally defined by Renyi for random variables. Particular attention is given to stable and impulsive Poisson innovation processes. Here, our results recognize Poisson innovations as the more compressible ones, with an entropy measure far below that of stable innovations. While this result departs from the previous knowledge regarding the compressibility of fat-tailed distributions, our entropy measure ranks stable innovations according to their tail decay.
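The first two entries above rest on the dithered, MMSE-scaled modulo-lattice transformation. The scalar sketch below is only a one-dimensional illustration under assumed parameters (the lattice spacing delta and noise variance N are arbitrary choices), not the high-dimensional construction analyzed in those papers; it checks numerically that the induced channel is modulo-additive with effective-noise variance P*N/(P+N).

```python
import numpy as np

rng = np.random.default_rng(1)
delta = 4.0                            # 1-D coarse lattice: Lambda = delta * Z
P = delta ** 2 / 12.0                  # power of the dithered transmit signal
N = 0.25                               # AWGN variance
alpha = P / (P + N)                    # MMSE scaling factor
n_trials = 200_000

def mod_lattice(x, d=delta):
    """Reduce x modulo the lattice d*Z into the cell [-d/2, d/2]."""
    return x - d * np.round(x / d)

v = rng.uniform(-delta / 8, delta / 8, n_trials)   # inputs to the induced channel
u = rng.uniform(-delta / 2, delta / 2, n_trials)   # common-randomness dither
x = mod_lattice(v - u)                             # transmitted, uniform over the cell
y = x + rng.normal(0.0, np.sqrt(N), n_trials)      # AWGN channel output
y_eq = mod_lattice(alpha * y + u)                  # scaled, re-dithered, folded

z_eff = mod_lattice(y_eq - v)                      # effective additive noise
print(f"empirical effective-noise variance: {z_eff.var():.4f}")
print(f"MMSE prediction P*N/(P+N):          {P * N / (P + N):.4f}")
```

The two printed numbers agree up to simulation noise, which is the modulo-additive property those abstracts describe: the residual seen by the receiver does not depend on the input v.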