
Residual-Aided End-to-End Learning of Communication System without Known Channel

Published by Hao Jiang
Publication date: 2021
Research field: Information Engineering
Paper language: English





Leveraging powerful deep learning techniques, end-to-end (E2E) learning of a communication system can outperform the classical communication system. Unfortunately, such a system cannot be trained by deep learning without a known channel. To deal with this problem, a generative adversarial network (GAN) based training scheme has recently been proposed to imitate the real channel. However, the gradient vanishing and overfitting problems of the GAN result in serious performance degradation of the E2E-learned communication system. To mitigate these two problems, we propose a residual-aided GAN (RA-GAN) based training scheme in this paper. Specifically, inspired by the idea of residual learning, we propose a residual generator that mitigates the gradient vanishing problem by enabling more robust gradient backpropagation. Moreover, to cope with the overfitting problem, we reconstruct the loss function for training by adding a regularizer that limits the representation ability of the RA-GAN. Simulation results show that the trained residual generator has better generation performance than the conventional generator, and that the proposed RA-GAN based training scheme achieves near-optimal block error rate (BLER) performance with a negligible increase in computational complexity, on both a theoretical channel model and a ray-tracing based channel dataset.
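To make the residual-generator idea concrete, the following is a minimal sketch, assuming a PyTorch setup with hypothetical layer sizes; it is not the authors' implementation. The generator models the channel output as the transmitted signal plus a learned residual, which provides a skip path for gradients, and the generator loss adds an L2 weight penalty as one plausible reading of the regularizer that limits the RA-GAN's representation ability.

```python
# Hedged sketch (not the authors' code) of a residual channel generator.
import torch
import torch.nn as nn

class ResidualGenerator(nn.Module):
    def __init__(self, sig_dim: int = 2, noise_dim: int = 16, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(sig_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, sig_dim),
        )

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Skip connection: generated channel output = input signal + learned residual.
        return x + self.net(torch.cat([x, z], dim=-1))

def generator_loss(d_fake: torch.Tensor, gen: nn.Module, lam: float = 1e-3) -> torch.Tensor:
    # Adversarial term plus an L2 weight regularizer; the regularizer is one
    # plausible way to limit representation ability, as described in the abstract.
    adv = -torch.log(d_fake + 1e-8).mean()
    reg = sum((p ** 2).sum() for p in gen.parameters())
    return adv + lam * reg
```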




Read also

In this paper, an unsupervised machine learning method for geometric constellation shaping is investigated. By embedding a differentiable fiber channel model between two neural networks, the learning algorithm optimizes the geometric constellation shape. The learned constellations yield improved performance over state-of-the-art geometrically shaped constellations and embody an implicit trade-off between amplification noise and nonlinear effects. Further, the method allows joint optimization of system parameters, such as the optimal launch power, simultaneously with the constellation shape. An experimental demonstration validates the findings: improved performance is reported, up to 0.13 bit/4D in simulation and up to 0.12 bit/4D experimentally.
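As a rough illustration of this kind of training loop, the sketch below learns a constellation by backpropagating through a differentiable channel; a simple AWGN channel stands in for the paper's differentiable fiber model, and the constellation order and layer sizes are arbitrary assumptions, not values from the paper.

```python
# Illustrative sketch of geometric constellation shaping via autoencoder training.
import torch
import torch.nn as nn

M = 16  # constellation order (assumption for illustration)

class ShapingAutoencoder(nn.Module):
    def __init__(self, m: int = M):
        super().__init__()
        # Learnable 2-D constellation points act as the "encoder".
        self.points = nn.Parameter(torch.randn(m, 2))
        self.decoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, m))

    def forward(self, labels: torch.Tensor, noise_std: float = 0.1) -> torch.Tensor:
        # Normalize to unit average power, transmit, add channel noise, decode.
        pts = self.points / self.points.pow(2).sum(dim=1).mean().sqrt()
        x = pts[labels]
        y = x + noise_std * torch.randn_like(x)  # AWGN stand-in for the fiber model
        return self.decoder(y)

model = ShapingAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
labels = torch.randint(0, M, (512,))
loss = nn.functional.cross_entropy(model(labels), labels)
loss.backward()
opt.step()
```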
Xu Tan, Xiao-Lei Zhang, 2020
Robust voice activity detection (VAD) is a challenging task in low signal-to-noise-ratio (SNR) environments. Recent studies show that speech enhancement helps VAD, but the performance improvement is limited. To address this issue, we propose a speech-enhancement-aided end-to-end multi-task model for VAD. The model has two decoders, one for speech enhancement and the other for VAD; the two decoders share the same encoder and speech separation network. Rather than simply using two separate objectives for VAD and speech enhancement, we propose a new joint optimization objective -- the VAD-masked scale-invariant source-to-distortion ratio (mSI-SDR). mSI-SDR uses VAD information to mask the output of the speech enhancement decoder during training, so the VAD and speech enhancement tasks are jointly optimized not only through the shared encoder and separation network but also at the objective level. The method also theoretically satisfies real-time processing requirements. Experimental results show that the multi-task method significantly outperforms its single-task VAD counterpart, and that mSI-SDR outperforms SI-SDR in the same multi-task setting.
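The following is a hedged sketch, not the authors' exact formulation, of how a VAD-masked SI-SDR objective might be written: the enhancement output is masked by the VAD probabilities before SI-SDR is computed, so both decoders are coupled at the objective level.

```python
# Sketch of a VAD-masked SI-SDR training loss (illustrative only).
import torch

def si_sdr(est: torch.Tensor, ref: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Scale-invariant source-to-distortion ratio, computed over the last dimension.
    alpha = (est * ref).sum(-1, keepdim=True) / (ref.pow(2).sum(-1, keepdim=True) + eps)
    target = alpha * ref
    noise = est - target
    return 10 * torch.log10(target.pow(2).sum(-1) / (noise.pow(2).sum(-1) + eps))

def masked_si_sdr_loss(enhanced: torch.Tensor, clean: torch.Tensor,
                       vad_probs: torch.Tensor) -> torch.Tensor:
    # vad_probs in [0, 1] mask the enhanced signal, so the VAD decoder's output
    # shapes the enhancement objective during training (negated for minimization).
    return -si_sdr(enhanced * vad_probs, clean).mean()
```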
In this paper, we present an end-to-end training framework for building state-of-the-art end-to-end speech recognition systems. Our training system utilizes a cluster of Central Processing Units (CPUs) and Graphics Processing Units (GPUs). Data reading, large-scale data augmentation, and neural network parameter updates are all performed on the fly. We use vocal tract length perturbation [1] and an acoustic simulator [2] for data augmentation. The processed features and labels are sent to the GPU cluster, and the Horovod allreduce approach is employed to train the neural network parameters. We evaluated the effectiveness of our system on the standard LibriSpeech corpus [3] and the 10,000-hour anonymized Bixby English dataset. Our end-to-end speech recognition system built with this training infrastructure achieved a 2.44% WER on the test-clean portion of the LibriSpeech test set after applying shallow fusion with a Transformer language model (LM). For the proprietary English Bixby open-domain test set, we obtained a WER of 7.92% using a Bidirectional Full Attention (BFA) end-to-end model after applying shallow fusion with an RNN-LM. When the monotonic chunkwise attention (MoChA) based approach is employed for streaming speech recognition, we obtained a WER of 9.95% on the same Bixby open-domain test set.
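For readers unfamiliar with the Horovod allreduce pattern mentioned above, the sketch below shows its typical PyTorch usage with a placeholder model; it is not the paper's recognizer or data pipeline, and the layer sizes and batch are arbitrary.

```python
# Minimal Horovod allreduce training sketch (placeholder model, not the paper's system).
import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()                              # one process per GPU
torch.cuda.set_device(hvd.local_rank())

model = nn.Linear(80, 1000).cuda()      # placeholder acoustic model
opt = torch.optim.Adam(model.parameters(), lr=1e-3 * hvd.size())

# Wrap the optimizer so gradients are averaged across workers via allreduce,
# and start every worker from identical parameters.
opt = hvd.DistributedOptimizer(opt, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

features = torch.randn(32, 80).cuda()   # stand-in for augmented features
labels = torch.randint(0, 1000, (32,)).cuda()
opt.zero_grad()
loss = nn.functional.cross_entropy(model(features), labels)
loss.backward()
opt.step()
```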
Parking lots (PLs) are usually full of cars. If these cars form a self-organizing vehicular network, they can act as a new kind of roadside unit (RSU) in urban areas, forwarding communication data between nearby mobile terminals and a base station. However, cars in PLs can leave at any time, which has been neglected in existing studies. In this paper, we investigate relay cooperative communication based on parked cars in PLs. Taking the impact of cars' leaving behavior into consideration, we derive expressions for the outage probability of a two-hop cooperative communication link and its link capacity. Finally, the numerical results show that a car's arrival time has a greater impact on outage probability than the duration the car has already been parked.
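For orientation, the textbook outage probability of a two-hop decode-and-forward relay link over independent Rayleigh-fading hops is sketched below; the paper's expressions additionally account for the probability that the relaying car has not yet left, which is not reproduced here.

```latex
% Baseline two-hop outage: the link is in outage if either hop's SNR falls
% below the threshold \gamma_{\mathrm{th}}; \bar{\gamma}_i is the average SNR
% of hop i. The car-departure effect studied in the paper is not included.
P_{\mathrm{out}} = 1 - \left(1 - P_1\right)\left(1 - P_2\right),
\qquad
P_i = \Pr\{\gamma_i < \gamma_{\mathrm{th}}\}
    = 1 - \exp\!\left(-\frac{\gamma_{\mathrm{th}}}{\bar{\gamma}_i}\right).
```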
The combination of deep neural network models and reinforcement learning algorithms makes it possible to learn policies for robotic behaviors that directly read raw sensory inputs, such as camera images, effectively subsuming both estimation and control into one model. However, real-world applications of reinforcement learning must specify the goal of the task by means of a manually programmed reward function, which in practice requires either designing the very same perception pipeline that end-to-end reinforcement learning promises to avoid, or instrumenting the environment with additional sensors to determine whether the task has been performed successfully. In this paper, we propose an approach that removes the need for manual engineering of reward specifications by enabling a robot to learn from a modest number of examples of successful outcomes, followed by actively solicited queries, in which the robot shows the user a state and asks for a label indicating whether that state represents successful completion of the task. While requesting labels for every single state would amount to asking the user to manually provide the reward signal, our method requires labels for only a tiny fraction of the states seen during training, making it an efficient and practical approach for learning skills without manually engineered rewards. We evaluate our method on real-world robotic manipulation tasks where the observations consist of images viewed by the robot's camera. In our experiments, our method effectively learns to arrange objects, place books, and drape cloth, directly from images and without any manually specified reward functions, with only 1-4 hours of interaction with the real world.
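A rough sketch of the core mechanism follows, under the assumption that a binary success classifier serves as the learned reward and that the most uncertain states are the ones queried; the class name, network sizes, and query rule are hypothetical, not the authors' code.

```python
# Sketch: success classifier as a learned reward, with uncertainty-based queries.
import torch
import torch.nn as nn

class SuccessClassifier(nn.Module):
    def __init__(self, obs_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def reward(self, obs: torch.Tensor) -> torch.Tensor:
        # The predicted probability of success doubles as the reward signal.
        return torch.sigmoid(self.net(obs)).squeeze(-1)

    def query_candidates(self, obs: torch.Tensor, k: int = 4) -> torch.Tensor:
        # Actively pick the k most uncertain states (probability closest to 0.5)
        # to show to the user for labeling.
        p = self.reward(obs)
        return torch.topk(-(p - 0.5).abs(), k).indices
```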
