
To Learn or Not to Learn: Deep Learning Assisted Wireless Modem Design

Posted by: Songyan Xue
Published: 2019
Research field: Electronic engineering
Paper language: English





Deep learning is driving a radical paradigm shift in wireless communications, all the way from the application layer down to the physical layer. Despite this, there is an ongoing debate as to what additional value artificial intelligence (or machine learning) could bring, particularly to physical-layer design, and what penalties it might incur. These questions motivate a fundamental rethinking of wireless modem design in the artificial intelligence era. Through several physical-layer case studies, we argue that machine learning could play a significant role, for instance in parallel error-control coding and decoding, channel equalization, interference cancellation, and multiuser and multiantenna detection. We also discuss the fundamental bottlenecks of machine learning and their potential solutions.
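As a rough illustration of the learned channel equalization the abstract mentions, here is a toy sketch (not from the paper): a two-tap feed-forward equalizer trained by plain SGD on squared error, which is exactly the update a one-neuron linear network would perform. The two-tap channel, the BPSK setup, and all constants are invented for illustration.

```python
import random

random.seed(0)

# Invented two-tap ISI channel: y[n] = 0.9*x[n] + 0.4*x[n-1]
TAPS = (0.9, 0.4)

def channel(x):
    return [TAPS[0] * x[n] + (TAPS[1] * x[n - 1] if n > 0 else 0.0)
            for n in range(len(x))]

# "Learn" a two-tap feed-forward equalizer by SGD on the squared error.
w = [0.0, 0.0]
lr = 0.05
train_x = [random.choice([-1.0, 1.0]) for _ in range(5000)]  # BPSK symbols
train_y = channel(train_x)
for n in range(1, len(train_x)):
    est = w[0] * train_y[n] + w[1] * train_y[n - 1]
    err = train_x[n] - est
    w[0] += lr * err * train_y[n]
    w[1] += lr * err * train_y[n - 1]

# Hard-decision symbol error rate on fresh data (noise-free channel here).
test_x = [random.choice([-1.0, 1.0]) for _ in range(1000)]
test_y = channel(test_x)
errors = sum(1 for n in range(1, len(test_x))
             if ((w[0] * test_y[n] + w[1] * test_y[n - 1]) > 0) != (test_x[n] > 0))
ser = errors / (len(test_x) - 1)
```

In this noise-free toy the learned taps approach the MMSE solution and the residual intersymbol interference is small enough that hard decisions are essentially error-free; a real learned equalizer would replace the linear combiner with a deeper network and train over noisy, time-varying channels.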




Read also

Learning how to act when there are many available actions in each state is a challenging task for Reinforcement Learning (RL) agents, especially when many of the actions are redundant or irrelevant. In such cases, it is sometimes easier to learn which actions not to take. In this work, we propose the Action-Elimination Deep Q-Network (AE-DQN) architecture that combines a Deep RL algorithm with an Action Elimination Network (AEN) that eliminates sub-optimal actions. The AEN is trained to predict invalid actions, supervised by an external elimination signal provided by the environment. Simulations demonstrate a considerable speedup and added robustness over vanilla DQN in text-based games with over a thousand discrete actions.
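The core mechanism above can be sketched in a few lines: mask out the actions the elimination network flags as invalid before taking the greedy argmax over Q-values. The function name, threshold, and all numbers below are invented toy values, not the paper's interface.

```python
def eliminate_and_act(q_values, p_invalid, threshold=0.5):
    """Greedy action selection restricted to actions the elimination
    network does not flag as invalid (toy sketch of AE-DQN's idea)."""
    admissible = [a for a, p in enumerate(p_invalid) if p < threshold]
    if not admissible:            # if everything is eliminated, fall back to greedy
        admissible = list(range(len(q_values)))
    return max(admissible, key=lambda a: q_values[a])

q = [0.2, 0.9, 0.1, 0.5]          # DQN's Q-value estimates
p_bad = [0.1, 0.8, 0.2, 0.3]      # AEN's predicted invalidity per action
chosen = eliminate_and_act(q, p_bad)  # action 1 has the top Q but is eliminated
```

With a thousand-action space, shrinking the admissible set this way is what yields the reported speedup: exploration and the max in the Q-learning target both run over far fewer actions.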
We propose a novel regularization algorithm for training deep neural networks when the training data is severely biased. Since a neural network efficiently learns the data distribution, it is likely to exploit the bias information when categorizing input data, which leads to poor test-time performance if the bias is in fact irrelevant to the categorization. In this paper, we formulate a regularization loss based on the mutual information between the feature embedding and the bias. Based on the idea of minimizing this mutual information, we propose an iterative algorithm to unlearn the bias information. We employ an additional network to predict the bias distribution and train it adversarially against the feature embedding network. At the end of training, the bias prediction network cannot predict the bias, not because it is poorly trained, but because the feature embedding network has successfully unlearned the bias information. Quantitative and qualitative experimental results show that our algorithm effectively removes the bias information from the feature embedding.
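The quantity this regularizer drives toward zero can be made concrete with a toy estimator of empirical mutual information between a discrete embedding feature and a discrete bias label (this plug-in estimator is our illustration, not the paper's method, which works on continuous embeddings via an adversarial network):

```python
import math
from collections import Counter

def empirical_mi(pairs):
    """Empirical mutual information (in bits) between two discrete
    variables, given a list of (feature, bias) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    pf = Counter(f for f, _ in pairs)
    pb = Counter(b for _, b in pairs)
    return sum((c / n) * math.log2((c / n) / ((pf[f] / n) * (pb[b] / n)))
               for (f, b), c in joint.items())

biased = [(0, 0), (1, 1)] * 50                       # feature copies the bias: MI = 1 bit
unlearned = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25    # feature independent of bias: MI = 0
```

A successfully "unlearned" embedding is one whose features behave like the second list: knowing the feature tells you nothing about the bias.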
Wireless sensor networks (WSNs) act as the backbone of Internet of Things (IoT) technology. In WSNs, field sensing and fusion are the most common problems, involving the collection and processing of a huge volume of spatial samples in an unknown field to reconstruct the field or extract its features. A major concern is how to reduce the communication overhead and data redundancy while meeting a prescribed fusion accuracy. In this paper, an integrated communication and computation framework based on meta-learning is proposed to enable adaptive field sensing and reconstruction. It consists of a stochastic-gradient-descent (SGD) based base-learner used for field model prediction, aiming to minimize the average prediction error, and a reinforcement meta-learner that optimizes the sensing decision by simultaneously rewarding the error reduction achieved with the samples obtained so far and penalizing the corresponding communication cost. An adaptive sensing algorithm based on this two-layer meta-learning framework is presented. It actively determines the next most informative sensing location, considerably reducing the number of spatial samples needed and yielding superior performance and robustness compared with conventional schemes. The convergence behavior of the proposed algorithm is also comprehensively analyzed and simulated; the results reveal that the proposed field sensing algorithm significantly improves the convergence rate.
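The meta-learner's reward/penalty trade-off for picking the next sensing location can be sketched as a simple scored argmax (the site names, gain/cost numbers, and the trade-off weight below are invented; in the paper the gains come from the base-learner's prediction errors):

```python
def next_sensing_location(pred_gain, comm_cost, lam=0.5):
    """Pick the location maximizing
    (expected error reduction) - lam * (communication cost)."""
    return max(pred_gain, key=lambda loc: pred_gain[loc] - lam * comm_cost[loc])

gain = {"A": 0.9, "B": 0.7, "C": 0.4}   # predicted error reduction per site
cost = {"A": 1.2, "B": 0.2, "C": 0.1}   # communication cost per site
best = next_sensing_location(gain, cost)  # site A gains most but costs too much
```

Raising `lam` makes the sensor stingier with transmissions; lowering it prioritizes reconstruction accuracy.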
In this paper we present an end-to-end meta-learned system for image compression. Traditional machine-learning-based approaches to image compression train one or more neural networks for generalization performance. However, at inference time, the encoder or the latent tensor output by the encoder can be optimized for each test image. This optimization can be regarded as a form of adaptation, or benevolent overfitting, to the input content. In order to reduce the gap between training and inference conditions, we propose a new training paradigm for learned image compression, which is based on meta-learning. In a first phase, the neural networks are trained normally. In a second phase, the Model-Agnostic Meta-learning approach is adapted to the specific case of image compression, where the inner loop performs latent tensor overfitting, and the outer loop updates both encoder and decoder neural networks based on the overfitting performance. Furthermore, after meta-learning, we propose to overfit and cluster the bias terms of the decoder on training image patches, so that at inference time the optimal content-specific bias terms can be selected at the encoder side. Finally, we propose a new probability model for lossless compression, which combines concepts from both multi-scale and super-resolution probability model approaches. We show the benefits of all our proposed ideas via carefully designed experiments.
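The inner-loop "latent tensor overfitting" step can be illustrated with a deliberately tiny stand-in: a fixed one-dimensional decoder `x_hat = d * z`, where only the latent `z` is gradient-descended on the reconstruction error. The decoder form, learning rate, and targets are all invented for this sketch; the real system uses deep encoder/decoder networks and a rate term.

```python
def overfit_latent(z, d, target, lr=0.05, steps=200):
    """Inner loop: decoder stays fixed, the latent z is optimized
    per image to minimize the squared reconstruction error."""
    for _ in range(steps):
        z -= lr * 2.0 * d * (d * z - target)   # gradient of (d*z - target)**2 w.r.t. z
    return z

z_star = overfit_latent(0.0, d=2.0, target=3.0)   # moves toward target / d = 1.5
```

In the meta-learning phase, the outer loop then updates the encoder and decoder weights based on how well this inner optimization reconstructs each training image, so the networks learn to be easy to overfit.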
Continual lifelong learning requires an agent or model to learn many sequentially ordered tasks, building on previous knowledge without catastrophically forgetting it. Much work has gone towards preventing the default tendency of machine learning models to catastrophically forget, yet virtually all such work involves manually-designed solutions to the problem. We instead advocate meta-learning a solution to catastrophic forgetting, allowing AI to learn to continually learn. Inspired by neuromodulatory processes in the brain, we propose A Neuromodulated Meta-Learning Algorithm (ANML). It differentiates through a sequential learning process to meta-learn an activation-gating function that enables context-dependent selective activation within a deep neural network. Specifically, a neuromodulatory (NM) neural network gates the forward pass of another (otherwise normal) neural network called the prediction learning network (PLN). The NM network thus also indirectly controls selective plasticity (i.e., the backward pass) of the PLN. ANML enables continual learning without catastrophic forgetting at scale: it produces state-of-the-art continual learning performance, sequentially learning as many as 600 classes (over 9,000 SGD updates).
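Why gating the forward pass also gates plasticity can be seen in a one-unit toy (invented for illustration; ANML's gates are produced by a learned NM network over deep activations): a multiplicative gate of zero removes a unit's forward contribution, and because the gradient passes through the same product, it blocks that unit's weight update too.

```python
def gated_step(x, target, w, gate, lr=0.1):
    """One SGD step on a gated linear unit y = sum_i gate_i * w_i * x.
    A zero gate blocks both the unit's forward contribution and its
    gradient, giving context-dependent selective plasticity."""
    y = sum(g * wi * x for g, wi in zip(gate, w))
    err = y - target
    # dy/dw_i = gate_i * x, so fully gated-off units receive no update
    return [wi - lr * err * g * x for g, wi in zip(gate, w)]

new_w = gated_step(1.0, 2.0, [0.5, 0.5], gate=[1.0, 0.0])  # only w[0] moves
```

Meta-learning the gating function then amounts to learning which subnetwork should be active (and plastic) in which context, so new tasks do not overwrite weights that earlier tasks rely on.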