
Reinforcement Learning for Efficient and Tuning-Free Link Adaptation

Published by: Vidit Saxena
Publication date: 2020
Research language: English





Wireless links adapt their data transmission parameters to the dynamic channel state -- this is called link adaptation. Classical link adaptation relies on tuning parameters that are challenging to configure for optimal link performance. Recently, reinforcement learning has been proposed to automate link adaptation, where the transmission parameters are modeled as discrete arms of a multi-armed bandit. In this context, we propose a latent learning model for link adaptation that exploits the correlation between data transmission parameters. Further, motivated by the recent success of Thompson sampling for multi-armed bandit problems, we propose a latent Thompson sampling (LTS) algorithm that quickly learns the optimal parameters for a given channel state. We extend LTS to fading wireless channels through a tuning-free mechanism that automatically tracks the channel dynamics. In numerical evaluations with fading wireless channels, LTS improves the link throughput by up to 100% compared to state-of-the-art link adaptation algorithms.
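As a concrete illustration of the bandit formulation, the sketch below implements plain Thompson sampling over a discrete set of transmission arms, with each arm modeled as a hypothetical modulation-and-coding rate and a Beta posterior over its ACK probability. This is only the classical independent-arm baseline: the paper's LTS additionally exploits the correlation between arms through a latent model and adds a tuning-free mechanism for fading channels, neither of which is reproduced here. The rate table and the toy channel are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical MCS arms: spectral efficiency (bits/symbol) per arm.
rates = np.array([0.5, 1.0, 2.0, 3.0, 4.5, 6.0])
alpha = np.ones(len(rates))  # Beta posterior successes (ACKs)
beta = np.ones(len(rates))   # Beta posterior failures (NACKs)

def select_arm():
    # Thompson sampling: draw a plausible ACK probability per arm,
    # then pick the arm with the highest expected throughput.
    theta = rng.beta(alpha, beta)
    return int(np.argmax(theta * rates))

def update(arm, ack):
    # Bayesian update of the chosen arm's Beta posterior.
    if ack:
        alpha[arm] += 1
    else:
        beta[arm] += 1

# Toy static channel: true ACK probability falls with the arm's rate.
true_p = np.clip(1.2 - 0.18 * rates, 0.05, 0.99)
for t in range(2000):
    arm = select_arm()
    update(arm, rng.random() < true_p[arm])

print("posterior mean ACK prob per arm:", alpha / (alpha + beta))
```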


Read also

Spectrum sharing among users is a fundamental problem in the management of any wireless network. In this paper, we discuss the problem of distributed spectrum collaboration without central management under general unknown channels. Since the cost of communication, coordination, and control rises rapidly with the number of devices and the expanding bandwidth in use, there is a clear need for distributed techniques for spectrum collaboration that use no explicit signaling. In this paper, we combine game-theoretic insights with deep Q-learning to provide a novel asymptotically optimal solution to the spectrum collaboration problem. We propose a deterministic distributed deep reinforcement learning (D3RL) mechanism using a deep Q-network (DQN). It chooses channels using the Q-values and the channel loads: the options available to a user are limited to the few channels with the highest Q-values, and among those the least-loaded channel is selected. Using insights from both game theory and combinatorial optimization, we show that this technique is asymptotically optimal for large overloaded networks. The selected channel and the outcome of the successful transmission are fed back into the training of the deep Q-network and incorporated into the learned Q-values. We also analyzed the performance to understand the behavior of D3RL in differ…
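The channel selection rule described above is simple enough to sketch directly. The snippet below is a minimal, hypothetical rendering of that rule: shortlist the k channels with the highest Q-values, then pick the least loaded one among them. Training the DQN that produces the Q-values, and the outcome feedback loop, are not shown.

```python
import numpy as np

def d3rl_select(q_values, channel_loads, k=3):
    """Deterministic selection rule from the abstract: shortlist the
    k channels with the highest Q-values, then pick the least-loaded
    channel among them. Function and argument names are illustrative."""
    shortlist = np.argsort(q_values)[-k:]  # top-k channels by Q-value
    return int(shortlist[np.argmin(channel_loads[shortlist])])

# Hypothetical example with 6 channels.
q = np.array([0.2, 0.9, 0.7, 0.8, 0.1, 0.6])
load = np.array([3, 5, 1, 4, 0, 2])
print(d3rl_select(q, load, k=3))  # -> 2: in the top-3 by Q, least loaded
```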
Cong Shen, Jie Xu, Sihui Zheng (2021)
We advocate a new resource allocation framework, which we term resource rationing, for wireless federated learning (FL). Unlike existing resource allocation methods for FL, resource rationing focuses on balancing resources across learning rounds so that their collective impact on the federated learning performance is explicitly captured. This new framework can be integrated seamlessly with existing resource allocation schemes to optimize the convergence of FL. In particular, a novel later-is-better principle is front and center in resource rationing, and is validated empirically in several instances of wireless FL. We also point out technical challenges and research opportunities that are worth pursuing. Resource rationing highlights the benefits of treating the emerging FL as a new class of service with its own characteristics, and of designing communication algorithms for this particular service.
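As a loose illustration of the later-is-better principle, the sketch below splits a fixed resource budget across FL rounds with geometrically increasing weights, so that later rounds receive more resources. The geometric form and all names are assumptions for illustration; the paper does not prescribe this particular schedule.

```python
def later_is_better_schedule(total_budget, num_rounds, growth=1.5):
    """One possible instantiation of 'later-is-better': allocate a
    fixed budget across FL rounds with geometrically increasing
    weights, biasing resources toward later rounds."""
    weights = [growth ** t for t in range(num_rounds)]
    scale = total_budget / sum(weights)
    return [w * scale for w in weights]

# Example: 100 units of resource (e.g., bandwidth) over 5 rounds.
print(later_is_better_schedule(total_budget=100, num_rounds=5))
```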
In this paper, the problem of minimizing energy and time consumption for task computation and transmission is studied in a mobile edge computing (MEC)-enabled balloon network. In the considered network, each user needs to process a computational task at each time instant, and high-altitude balloons (HABs), acting as flying wireless base stations, can use their powerful computational abilities to process the tasks offloaded from their associated users. Since the data size of each user's computational task varies over time, the HABs must dynamically adjust the user association, service sequence, and task partition scheme to meet the users' needs. This problem is posed as an optimization problem whose goal is to minimize the energy and time consumption for task computing and transmission by adjusting the user association, service sequence, and task allocation scheme. To solve this problem, a support vector machine (SVM)-based federated learning (FL) algorithm is proposed to determine the user association proactively. The proposed SVM-based FL method enables each HAB to cooperatively build an SVM model that can determine all user associations without transmitting either the users' historical associations or their computational tasks to other HABs. Given the prediction of the optimal user association, the service sequence and task allocation of each user can be optimized to minimize the weighted sum of the energy and time consumption. Simulations with real city cellular traffic data from the OMNILab at Shanghai Jiao Tong University show that the proposed algorithm can reduce the weighted sum of the energy and time consumption of all users by up to 16.1% compared to a conventional centralized method.
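A minimal sketch of the federated flavor of this idea appears below: each HAB runs local subgradient steps on a linear SVM using only its own data, and only the model weights are averaged across HABs (a FedAvg-style aggregation), so no raw data leaves a HAB. This is a generic stand-in, not the paper's exact formulation; the loss, optimizer, and data layout are all assumptions.

```python
import numpy as np

def local_svm_step(w, X, y, lam=0.01, lr=0.1, epochs=5):
    """Subgradient descent on the hinge loss of a linear SVM,
    using only this HAB's local dataset (X, y), y in {-1, +1}."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (w @ xi)
            grad = lam * w - (yi * xi if margin < 1 else 0)
            w = w - lr * grad
    return w

def federated_svm(local_data, dim, rounds=10):
    """FedAvg-style aggregation: each HAB refines the shared model
    locally; only the model weights are averaged across HABs."""
    w = np.zeros(dim)
    for _ in range(rounds):
        local_ws = [local_svm_step(w.copy(), X, y) for X, y in local_data]
        w = np.mean(local_ws, axis=0)
    return w

# Toy usage: two HABs with random, roughly separable local datasets.
rng = np.random.default_rng(1)
def make_local(n=50, dim=4):
    X = rng.normal(size=(n, dim))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])
    return X, y

w = federated_svm([make_local(), make_local()], dim=4)
print("aggregated SVM weights:", w)
```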
Songyan Xue, Yi Ma, Na Yi (2020)
This paper addresses the joint transmitter and noncoherent receiver design for multiuser multiple-input multiple-output (MU-MIMO) systems through deep learning. Given the deep neural network (DNN)-based noncoherent receiver, the novelty of this work mainly lies in the multiuser waveform design at the transmitter side. According to the signal format, the proposed deep learning solutions can be divided into two groups. One group is called pilot-aided waveform, where the information-bearing symbols are time-multiplexed with the pilot symbols. The other is called learning-based waveform, where the multiuser waveform is partially or even completely designed by deep learning algorithms. Specifically, if the information-bearing symbols are directly embedded in the waveform, it is called a systematic waveform; otherwise, it is called a non-systematic waveform, where no artificial design is involved. Simulation results show that the pilot-aided waveform design outperforms the conventional zero-forcing receiver with least squares (LS) channel estimation on small-size MU-MIMO systems. By exploiting the time-domain degrees of freedom (DoF), the learning-based waveform design further improves the detection performance by at least 5 dB in the high signal-to-noise ratio (SNR) range. Moreover, it is found that the traditional weight initialization method can cause a training imbalance among different users in the learning-based waveform design. To tackle this issue, a novel weight initialization method is proposed that provides balanced convergence performance with no complexity penalty.
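The pilot-aided signal format is concrete enough for a small sketch: pilot symbols are time-multiplexed with information-bearing symbols, here one pilot at the head of each block. The block length, pilot value, and modulation are illustrative assumptions, not the paper's design.

```python
import numpy as np

def pilot_aided_frame(data, pilots, period=4):
    """Time-multiplex pilots with information symbols: one pilot at
    the start of every `period`-symbol block, the remaining slots
    carrying data. The layout is an illustrative assumption."""
    blocks = []
    p = 0
    for start in range(0, len(data), period - 1):
        blocks.append(pilots[p % len(pilots)])
        blocks.extend(data[start:start + period - 1])
        p += 1
    return np.array(blocks)

# Example: 12 QPSK-like data symbols with a constant unit pilot.
data = np.exp(1j * np.pi / 4 * (2 * np.random.randint(0, 4, 12) + 1))
frame = pilot_aided_frame(data, pilots=np.array([1 + 0j]), period=4)
print(frame.shape)  # 12 data symbols + 4 pilots -> 16 slots
```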
In this paper, a joint task, spectrum, and transmit power allocation problem is investigated for a wireless network in which the base stations (BSs) are equipped with mobile edge computing (MEC) servers to jointly provide computational and communication services to users. Each user can request one of three types of computational tasks. Since the data size of each computational task is different, the BSs must adjust their resource (subcarrier and transmit power) and task allocation schemes as the requested tasks vary in order to serve the users effectively. This problem is formulated as an optimization problem whose goal is to minimize the maximal computational and transmission delay among all users. A multi-stack reinforcement learning (RL) algorithm is developed to solve this problem. Using the proposed algorithm, each BS can record the historical resource allocation schemes and users' information in its multiple stacks to avoid learning the same resource allocation scheme and users' states, thus improving the convergence speed and learning efficiency. Simulation results illustrate that the proposed algorithm can reduce the number of iterations needed for convergence and the maximal delay among all users by up to 18% and 11.1%, respectively, compared to the standard Q-learning algorithm.
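A rough sketch of the multi-stack idea follows: a tabular Q-learner keeps bounded stacks of recently tried (state, action) pairs and steers exploration away from choices it has already recorded for the current state. The stack layout and hyperparameters are assumptions for illustration; the paper's algorithm operates on BS resource allocation states rather than this generic interface.

```python
import random
from collections import defaultdict, deque

class MultiStackQLearner:
    """Simplified sketch of multi-stack RL: bounded stacks of visited
    (state, action) pairs bias exploration toward untried choices."""
    def __init__(self, actions, num_stacks=4, depth=32,
                 alpha=0.1, gamma=0.9, eps=0.1):
        self.q = defaultdict(float)
        self.actions = actions
        self.stacks = [deque(maxlen=depth) for _ in range(num_stacks)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def _seen(self, state, action):
        # True if any stack already records this (state, action) pair.
        return any((state, action) in s for s in self.stacks)

    def act(self, state):
        # Prefer actions not yet recorded for this state.
        fresh = [a for a in self.actions if not self._seen(state, a)]
        pool = fresh or self.actions
        if random.random() < self.eps:
            a = random.choice(pool)
        else:
            a = max(pool, key=lambda a: self.q[(state, a)])
        # Record the choice, hashing into one of the stacks.
        self.stacks[hash((state, a)) % len(self.stacks)].append((state, a))
        return a

    def learn(self, s, a, r, s2):
        # Standard Q-learning temporal-difference update.
        best_next = max(self.q[(s2, a2)] for a2 in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best_next
                                        - self.q[(s, a)])
```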

