
Neural Communication Systems with Bandwidth-limited Channel

Posted by: Karen Ullrich
Publication date: 2020
Research field: Informatics Engineering
Paper language: English





Reliably transmitting messages despite information loss due to a noisy channel is a core problem of information theory. One of the most important aspects of real-world communication, e.g. via WiFi, is that it may happen at varying levels of information transfer. The bandwidth-limited channel models this phenomenon. In this study we consider learning coding with the bandwidth-limited channel (BWLC). Recently, neural communication models such as variational autoencoders have been studied for the task of source compression. We build upon this work by studying neural communication systems with the BWLC. Specifically, we find three modelling choices that are relevant under expected information loss. First, instead of separating the sub-tasks of compression (source coding) and error correction (channel coding), we propose to model both jointly. Framing the problem as a variational learning problem, we conclude that joint systems outperform their separate counterparts when coding is performed by flexible learnable function approximators such as neural networks. To facilitate learning, we introduce a differentiable and computationally efficient version of the bandwidth-limited channel. Second, we propose a design to model missing information with a prior, and incorporate this into the channel model. Finally, sampling from the joint model is improved by introducing auxiliary latent variables in the decoder. Experimental results justify the validity of our design decisions through improved distortion and FID scores.
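To make the channel component of such a system concrete, the following is a minimal PyTorch sketch of a differentiable bandwidth-limited channel layer: each latent code is truncated to a randomly sampled bandwidth, and the erased dimensions are filled in from a simple prior so that the decoder receives a fixed-size input and gradients flow through the surviving dimensions. The function name, the uniform bandwidth distribution, and the standard-normal fill-in are illustrative assumptions, not the design used in the paper.

# Hedged sketch of a differentiable bandwidth-limited channel (BWLC) layer.
# This is not the paper's implementation; names and the exact erasure rule are assumptions.
import torch

def bandwidth_limited_channel(z, prior_std=1.0):
    """Randomly truncate each latent code to a sampled bandwidth.

    z: tensor of shape (batch, dim) holding the encoder output (the channel input).
    Dimensions beyond the sampled bandwidth are erased and replaced by samples from
    a standard-normal prior, so the decoder always sees a full-size vector and
    gradients flow through the surviving dimensions.
    """
    batch, dim = z.shape
    # Sample a bandwidth (number of surviving dimensions) per example.
    bandwidth = torch.randint(low=1, high=dim + 1, size=(batch, 1), device=z.device)
    positions = torch.arange(dim, device=z.device).unsqueeze(0)   # shape (1, dim)
    keep = (positions < bandwidth).float()                        # binary mask, shape (batch, dim)
    noise = prior_std * torch.randn_like(z)                       # prior fill-in for the lost dimensions
    return keep * z + (1.0 - keep) * noise

# Usage: z = encoder(x); z_received = bandwidth_limited_channel(z); x_hat = decoder(z_received)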




Read also

For reliable transmission across a noisy communication channel, classical results from information theory show that it is asymptotically optimal to separate out the source and channel coding processes. However, this decomposition can fall short in the finite bit-length regime, as it requires non-trivial tuning of hand-crafted codes and assumes infinite computational power for decoding. In this work, we propose to jointly learn the encoding and decoding processes using a new discrete variational autoencoder model. By adding noise into the latent codes to simulate the channel during training, we learn to both compress and error-correct given a fixed bit-length and computational budget. We obtain codes that are not only competitive against several separation schemes, but also learn useful robust representations of the data for downstream tasks such as classification. Finally, inference amortization yields an extremely fast neural decoder, almost an order of magnitude faster compared to standard decoding methods based on iterative belief propagation.
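As a rough illustration of the channel-simulation idea in the abstract above, the sketch below injects bit-flip noise into a binary latent code during training, using a straight-through estimator so that gradients still reach the encoder. The binary code, the flip probability, and the straight-through trick are assumptions made for illustration; the abstract does not specify the exact noise model.

# Hedged sketch: simulating a binary symmetric channel on discrete latent codes during training.
import torch

def noisy_binary_channel(logits, flip_prob=0.1):
    """Binarize encoder logits and flip each bit with probability flip_prob.

    A straight-through estimator keeps the forward pass discrete while letting
    gradients flow to the encoder as if the binarization were the identity.
    """
    probs = torch.sigmoid(logits)
    bits = (probs > 0.5).float()
    bits = bits + (probs - probs.detach())                # straight-through estimator
    flips = (torch.rand_like(bits) < flip_prob).float()   # simulated channel errors
    return (1.0 - flips) * bits + flips * (1.0 - bits)    # XOR the code with the flip mask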
Wireless communication is nowadays an important aspect of robotics. There are many applications in which a robot must move to a certain goal point while transmitting information through a wireless channel which depends on the particular trajectory chosen by the robot to reach the goal point. In this context, we develop a method to generate optimum trajectories which allow the robot to reach the goal point using little mechanical energy while transmitting as much data as possible. This is done by optimizing the trajectory (path and velocity profile) so that the robot consumes less energy while also offering good wireless channel conditions. We consider a realistic wireless channel model as well as a realistic dynamic model for the mobile robot (considered here to be a drone). Simulation results illustrate the merits of the proposed method.
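One generic way to write the trade-off sketched above, purely as an illustrative formulation rather than that paper's actual problem statement, is a trajectory optimization over the path $p(\cdot)$ and velocity profile $v(\cdot)$:

\[
\min_{p(\cdot),\, v(\cdot)} \int_0^T P_{\mathrm{mech}}\big(p(t), v(t)\big)\, \mathrm{d}t
\quad \text{s.t.} \quad
\int_0^T R\big(p(t)\big)\, \mathrm{d}t \ge D_{\min},
\qquad p(T) = p_{\mathrm{goal}},
\]

where $P_{\mathrm{mech}}$ is the mechanical power drawn by the drone, $R(p)$ the achievable rate of the wireless channel at position $p$, $D_{\min}$ the amount of data to be delivered, and $T$ the arrival time.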
Channel estimation is of crucial importance in massive multiple-input multiple-output (m-MIMO) visible light communication (VLC) systems. In order to tackle this problem, a fast and flexible denoising convolutional neural network (FFDNet)-based channel estimation scheme for m-MIMO VLC systems is proposed. The channel matrix of the m-MIMO VLC channel is identified as a two-dimensional natural image since the channel has the characteristic of sparsity. A deep learning-enabled image denoising network, FFDNet, is exploited to learn from a large number of training data and to estimate the m-MIMO VLC channel. Simulation results demonstrate that our proposed channel estimation based on the FFDNet significantly outperforms the benchmark scheme based on minimum mean square error.
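The pipeline implied by this scheme, a rough pilot-based estimate cleaned up by an image denoiser, could look roughly like the sketch below. The tiny CNN here is only a stand-in for FFDNet, and the least-squares step, tensor shapes, and layer sizes are assumptions for illustration.

# Hedged sketch of denoising-based channel estimation: a rough least-squares estimate of the
# channel matrix is treated as a noisy 2-D "image" and cleaned by a small CNN standing in for FFDNet.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for FFDNet: a few conv layers that predict and remove the estimation noise."""
    def __init__(self, channels=1, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )
    def forward(self, noisy):
        return noisy - self.net(noisy)   # residual learning: predict the noise, subtract it

def estimate_channel(y, x_pilot, denoiser):
    """y: received pilot observations (rx_elements, pilots); x_pilot: known pilots (tx_leds, pilots)."""
    h_ls = y @ torch.linalg.pinv(x_pilot)                     # rough least-squares estimate H_ls = Y X^+
    return denoiser(h_ls.unsqueeze(0).unsqueeze(0)).squeeze() # denoise it as a (1, 1, rx, tx) image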
148 - Gen Li, Yuxin Chen, Yuejie Chi, 2021
Low-complexity models such as linear function representation play a pivotal role in enabling sample-efficient reinforcement learning (RL). The current paper pertains to a scenario with value-based linear representation, which postulates the linear realizability of the optimal Q-function (also called the linear $Q^\star$ problem). While linear realizability alone does not allow for sample-efficient solutions in general, the presence of a large sub-optimality gap is a potential game changer, depending on the sampling mechanism in use. Informally, sample efficiency is achievable with a large sub-optimality gap when a generative model is available but is unfortunately infeasible when we turn to standard online RL settings. In this paper, we make progress towards understanding this linear $Q^\star$ problem by investigating a new sampling protocol, which draws samples in an online/exploratory fashion but allows one to backtrack and revisit previous states in a controlled and infrequent manner. This protocol is more flexible than the standard online RL setting, while being practically relevant and far more restrictive than the generative model. We develop an algorithm tailored to this setting, achieving a sample complexity that scales polynomially with the feature dimension, the horizon, and the inverse sub-optimality gap, but not the size of the state/action space. Our findings underscore the fundamental interplay between sampling protocols and low-complexity structural representation in RL.
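For reference, the linear $Q^\star$ realizability assumption and the sub-optimality gap mentioned above are usually written as follows (standard notation, not copied from the paper itself): there is a known feature map $\phi$ and an unknown parameter $\theta^\star$ such that

\[
Q^\star(s,a) = \langle \phi(s,a), \theta^\star \rangle \quad \text{for all } (s,a),
\qquad
\mathrm{gap}(s,a) := V^\star(s) - Q^\star(s,a),
\qquad
\Delta := \min_{(s,a):\, \mathrm{gap}(s,a) > 0} \mathrm{gap}(s,a),
\]

and the "large gap" regime assumes $\Delta$ is bounded away from zero (in the finite-horizon setting these quantities are defined per step $h$).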
174 - Yi Sun, Hong Shen, Zhenguo Du, 2021
A novel intercarrier interference (ICI)-aware orthogonal frequency division multiplexing (OFDM) channel estimation network ICINet is presented for rapidly time-varying channels. ICINet consists of two components: a preprocessing deep neural subnetwork (PreDNN) and a cascaded residual learning-based neural subnetwork (CasResNet). By fully taking into account the impact of ICI, the proposed PreDNN first refines the initial channel estimates in a subcarrier-wise fashion. In addition, the CasResNet is designed to further enhance the estimation accuracy. The proposed cascaded network is compatible with any pilot patterns and robust against mismatched system configurations. Simulation results verify the superiority of ICINet over existing networks in terms of better performance and much less complexity.
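A skeleton of such a two-stage estimator, written only to illustrate the "per-subcarrier refinement followed by cascaded residual refinement" structure described above, might look as follows in PyTorch; the layer widths, the real/imaginary channel layout, and the number of residual blocks are assumptions rather than the published ICINet architecture.

# Hedged skeleton of a two-stage OFDM channel estimator in the spirit of ICINet.
import torch
import torch.nn as nn

class PerSubcarrierRefiner(nn.Module):
    """Refines each subcarrier's initial estimate independently with a shared MLP."""
    def __init__(self, feat=2, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(feat, hidden), nn.ReLU(), nn.Linear(hidden, feat))
    def forward(self, h_init):               # h_init: (batch, subcarriers, 2) real/imag parts
        return h_init + self.mlp(h_init)     # subcarrier-wise residual refinement

class ResidualBlock(nn.Module):
    """1-D conv block over the subcarrier axis with a residual connection."""
    def __init__(self, feat=2, width=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(feat, width, 3, padding=1), nn.ReLU(),
            nn.Conv1d(width, feat, 3, padding=1),
        )
    def forward(self, x):
        return x + self.conv(x)

class TwoStageEstimator(nn.Module):
    def __init__(self, blocks=3):
        super().__init__()
        self.stage1 = PerSubcarrierRefiner()
        self.stage2 = nn.Sequential(*[ResidualBlock() for _ in range(blocks)])
    def forward(self, h_init):
        h = self.stage1(h_init)              # (batch, subcarriers, 2)
        h = self.stage2(h.transpose(1, 2))   # convolve across subcarriers: (batch, 2, subcarriers)
        return h.transpose(1, 2)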
