
Pareto Deterministic Policy Gradients and Its Application in 5G Massive MIMO Networks

Added by Zhou Zhou
Publication date: 2020
Research language: English





In this paper, we consider jointly optimizing cell load balance and network throughput via a reinforcement learning (RL) approach, where inter-cell handover (i.e., user association assignment) and massive MIMO antenna tilting are configured as the RL policy to learn. Our rationale for using RL is to circumvent the challenges of analytically modeling user mobility and network dynamics. To accomplish this joint optimization, we integrate vector rewards into the RL value network and conduct RL actions via a separate policy network. We name this method Pareto deterministic policy gradients (PDPG). It is an actor-critic, model-free, deterministic policy algorithm that handles the coupled objectives with two merits: 1) it solves the optimization by leveraging the degrees of freedom of the vector reward, as opposed to relying on a handcrafted scalar reward; 2) cross-validation over multiple policies is significantly reduced. Accordingly, the RL-enabled network behaves in a self-organized way: it learns the underlying user mobility from measurement history to proactively operate handover and antenna tilt without assumptions on the environment. Our numerical evaluation demonstrates that the introduced RL method outperforms scalar-reward-based approaches. Meanwhile, to be self-contained, an ideal static-optimization-based brute-force search solver is included as a benchmark. The comparison shows that the RL approach performs as well as this ideal strategy, even though the former is constrained to limited environment observations and a lower action frequency, whereas the latter has full access to the user mobility. The convergence of the introduced approach is also tested under different user mobility environments based on measurement data from a real scenario.
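
The following is a minimal sketch of the kind of actor-critic setup the abstract describes: a deterministic policy network paired with a critic that outputs one Q-value per objective (e.g., throughput and load balance). The state/action dimensions, network widths, and the preference weights `w` used in the policy step are illustrative assumptions, not the paper's exact PDPG update (which, e.g., also involves target networks and a Pareto-oriented treatment of the vector critic).

```python
# A minimal sketch of a vector-reward actor-critic, assuming a PyTorch setup.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, N_OBJECTIVES = 16, 4, 2   # e.g., 2 objectives: throughput, load balance

class Actor(nn.Module):
    """Deterministic policy: state -> action (e.g., antenna tilts, handover offsets)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, ACTION_DIM), nn.Tanh())
    def forward(self, s):
        return self.net(s)

class VectorCritic(nn.Module):
    """Critic that predicts one Q-value per objective instead of a single scalar."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + ACTION_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_OBJECTIVES))
    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1))

actor, critic = Actor(), VectorCritic()
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
gamma = 0.99
w = torch.tensor([0.5, 0.5])   # assumed preference weights for the policy step

def update(s, a, r_vec, s_next, done):
    """One gradient step on a batch of transitions carrying vector rewards r_vec."""
    with torch.no_grad():
        target = r_vec + gamma * (1 - done) * critic(s_next, actor(s_next))
    critic_loss = ((critic(s, a) - target) ** 2).mean()
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Policy step: ascend a scalarization of the vector critic (an illustrative choice,
    # standing in for the paper's Pareto-oriented update).
    actor_loss = -(critic(s, actor(s)) * w).sum(dim=-1).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

# Example call on a random batch of 32 transitions.
B = 32
update(torch.randn(B, STATE_DIM), torch.rand(B, ACTION_DIM) * 2 - 1,
       torch.randn(B, N_OBJECTIVES), torch.randn(B, STATE_DIM), torch.zeros(B, 1))
```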



Related research

Reinforcement learning algorithms such as the deep deterministic policy gradient (DDPG) algorithm have been widely used in continuous control tasks. However, the model-free DDPG algorithm suffers from high sample complexity. In this paper, we consider deterministic value gradients as a way to improve the sample efficiency of deep reinforcement learning algorithms. Previous works consider deterministic value gradients with a finite horizon, which is too myopic compared with an infinite horizon. We first give a theoretical guarantee of the existence of the value gradients in this infinite-horizon setting. Based on this guarantee, we propose a class of deterministic value gradient (DVG) algorithms with an infinite horizon, in which different rollout steps of the analytical gradients through the learned model trade off the variance of the value gradients against the model bias. Furthermore, to better combine the model-based deterministic value gradient estimators with the model-free deterministic policy gradient estimator, we propose the deterministic value-policy gradient (DVPG) algorithm. Finally, we conduct extensive experiments comparing DVPG with state-of-the-art methods on several standard continuous control benchmarks. The results demonstrate that DVPG substantially outperforms the other baselines.
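
As a rough illustration of the model-based value-gradient idea described above, the sketch below differentiates a k-step return through a learned dynamics model and reward model and bootstraps with a critic. The network sizes, the rollout length k, and how this term would be combined with a model-free DPG estimator (as DVPG does) are assumptions, not the paper's implementation.

```python
# A minimal sketch of a k-step deterministic value gradient, assuming PyTorch.
import torch
import torch.nn as nn

S, A, k, gamma = 8, 2, 3, 0.99

policy   = nn.Sequential(nn.Linear(S, 64), nn.ReLU(), nn.Linear(64, A), nn.Tanh())
dynamics = nn.Sequential(nn.Linear(S + A, 64), nn.ReLU(), nn.Linear(64, S))   # learned model
reward   = nn.Sequential(nn.Linear(S + A, 64), nn.ReLU(), nn.Linear(64, 1))   # learned reward
critic   = nn.Sequential(nn.Linear(S, 64), nn.ReLU(), nn.Linear(64, 1))       # V(s) for bootstrap

opt = torch.optim.Adam(policy.parameters(), lr=1e-4)

def k_step_value(s):
    """Differentiable k-step return under the learned model, bootstrapped by the critic."""
    ret, discount = 0.0, 1.0
    for _ in range(k):
        a = policy(s)
        sa = torch.cat([s, a], dim=-1)
        ret = ret + discount * reward(sa)
        s = dynamics(sa)        # analytic gradient flows through the learned dynamics
        discount *= gamma
    return ret + discount * critic(s)

# Policy update: ascend the model-based value estimate from a batch of start states.
loss = -k_step_value(torch.randn(32, S)).mean()
opt.zero_grad(); loss.backward(); opt.step()
```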
A widely used actor-critic reinforcement learning algorithm for continuous control, Deep Deterministic Policy Gradients (DDPG), suffers from the overestimation problem, which can negatively affect performance. Although the state-of-the-art Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm mitigates the overestimation issue, it can introduce a large underestimation bias. In this paper, we propose using the Boltzmann softmax operator for value function estimation in continuous control. We first analyze the softmax operator in continuous action spaces theoretically. We then uncover an important property of the softmax operator in actor-critic algorithms: it helps smooth the optimization landscape, which sheds new light on the benefits of the operator. We also design two new algorithms, Softmax Deep Deterministic Policy Gradients (SD2) and Softmax Deep Double Deterministic Policy Gradients (SD3), by building the softmax operator upon single and double estimators, which effectively reduces the overestimation and underestimation biases. We conduct extensive experiments on challenging continuous control tasks, and the results show that SD3 outperforms state-of-the-art methods.
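
Below is a minimal sketch of how a Boltzmann softmax value estimate can be formed in a continuous action space by sampling candidate actions around a deterministic policy and softmax-weighting their Q-values, in the spirit of the operator described above. The sample count, noise scale, inverse temperature `beta`, and the omission of importance-weight corrections are simplifying assumptions; the actual SD2/SD3 target construction is more involved.

```python
# A minimal sketch of a sampled Boltzmann softmax value estimate, assuming PyTorch.
import torch
import torch.nn as nn

S, A = 8, 2
policy = nn.Sequential(nn.Linear(S, 64), nn.ReLU(), nn.Linear(64, A), nn.Tanh())
q_net  = nn.Sequential(nn.Linear(S + A, 64), nn.ReLU(), nn.Linear(64, 1))
critic = lambda s, a: q_net(torch.cat([s, a], dim=-1))

def softmax_value(s, n_samples=50, beta=1e-3, noise_std=0.2):
    """Approximate softmax_beta Q(s, .) by sampling actions around the policy output."""
    B = s.shape[0]
    a = policy(s).unsqueeze(1).expand(-1, n_samples, -1)          # (B, n, A)
    a = (a + noise_std * torch.randn_like(a)).clamp(-1.0, 1.0)    # perturbed candidate actions
    s_rep = s.unsqueeze(1).expand(-1, n_samples, -1)              # (B, n, S)
    q = critic(s_rep.reshape(B * n_samples, -1),
               a.reshape(B * n_samples, -1)).view(B, n_samples)   # Q for every candidate
    weights = torch.softmax(beta * q, dim=1)                      # Boltzmann weights over actions
    return (weights * q).sum(dim=1, keepdim=True)                 # softmax-weighted value

v = softmax_value(torch.randn(32, S))   # e.g., usable inside a bootstrapped critic target
```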
We study a reinforcement learning setting where the state transition function is a convex combination of a stochastic continuous function and a deterministic function. This setting generalizes the widely studied stochastic state transition setting, namely the setting of the deterministic policy gradient (DPG). We first give a simple example illustrating that the deterministic policy gradient may be infinite under deterministic state transitions, and we introduce a theoretical technique to prove the existence of the policy gradient in this generalized setting. Using this technique, we prove that the deterministic policy gradient indeed exists for a certain set of discount factors, and we further prove two conditions that guarantee its existence for all discount factors. We then derive a closed form of the policy gradient whenever it exists. Furthermore, to overcome the challenge of DPG's high sample complexity in this setting, we propose the Generalized Deterministic Policy Gradient (GDPG) algorithm. The main innovation of the algorithm is a new way of applying model-based techniques to a model-free algorithm, the deep deterministic policy gradient (DDPG) algorithm. GDPG optimizes the long-term rewards of a model-based augmented MDP subject to a constraint that the long-term rewards of the MDP are less than those of the original one. Finally, we conduct extensive experiments comparing GDPG with state-of-the-art methods and with the direct model-based extension of DDPG on several standard continuous control benchmarks. The results demonstrate that GDPG substantially outperforms DDPG, the model-based extension of DDPG, and the other baselines in terms of both convergence and long-term rewards in most environments.
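
For concreteness, the sketch below samples transitions from the kind of generalized setting described above, read here as a mixture of a stochastic continuous transition kernel and a deterministic one with mixing weight alpha. The concrete dynamics and noise model are purely illustrative and say nothing about the GDPG algorithm itself.

```python
# A minimal sketch of a mixed stochastic/deterministic transition kernel.
import numpy as np

alpha = 0.7   # assumed weight of the stochastic component

def f_stochastic(s, a, rng):
    """Stochastic continuous component (illustrative linear-Gaussian dynamics)."""
    return 0.9 * s + 0.1 * a + 0.05 * rng.standard_normal(s.shape)

def g_deterministic(s, a):
    """Deterministic component (illustrative)."""
    return np.tanh(s + a)

def step(s, a, rng):
    """Sample the next state from the convex combination of the two kernels."""
    if rng.random() < alpha:
        return f_stochastic(s, a, rng)
    return g_deterministic(s, a)

rng = np.random.default_rng(0)
s_next = step(np.zeros(4), 0.1 * np.ones(4), rng)
```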
In this paper, we introduce a new neural network (NN) structure, multi-mode reservoir computing (Multi-Mode RC). It inherits the dynamic mechanism of RC and processes the NN's forward path and loss optimization using tensors as the underlying data format. Multi-Mode RC exhibits lower complexity than conventional RC structures (e.g., single-mode RC) while achieving comparable generalization performance. Furthermore, we introduce an alternating-least-squares-based learning algorithm for Multi-Mode RC and conduct the associated theoretical analysis. The result can be used to guide the configuration of NN parameters so as to circumvent over-fitting issues. As a key application, we consider the symbol detection task in multiple-input multiple-output (MIMO) orthogonal frequency-division multiplexing (OFDM) systems with massive MIMO employed at the base stations (BSs). Thanks to the tensor structure of massive MIMO-OFDM signals, our online-learning-based symbol detection method generalizes well in terms of bit error rate even with a limited online training set. Evaluation results suggest that the Multi-Mode RC-based learning framework can efficiently and effectively combat practical constraints of wireless systems (i.e., channel state information (CSI) errors and hardware non-linearity) to enable robust and adaptive learning-based communications over the air.
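
For readers unfamiliar with reservoir computing, here is a minimal sketch of the conventional single-mode RC (echo state network) baseline that the abstract contrasts Multi-Mode RC against: only the linear readout is trained, here with ridge regression on placeholder data. The reservoir size, spectral radius, and regularization are illustrative assumptions; the tensor-based Multi-Mode RC and its alternating-least-squares training are not reproduced here.

```python
# A minimal single-mode reservoir computer (echo state network) sketch.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_RES, N_OUT, T = 4, 200, 4, 500   # assumed input/reservoir/output sizes, sequence length

W_in = rng.uniform(-0.1, 0.1, (N_RES, N_IN))
W = rng.standard_normal((N_RES, N_RES))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u_seq):
    """Drive the fixed reservoir with an input sequence and return the state trajectory."""
    x = np.zeros(N_RES)
    states = np.zeros((len(u_seq), N_RES))
    for t, u in enumerate(u_seq):
        x = np.tanh(W @ x + W_in @ u)
        states[t] = x
    return states

# Train only the linear readout, via ridge regression on placeholder sequences.
U = rng.standard_normal((T, N_IN))    # placeholder input (e.g., received symbols)
Y = rng.standard_normal((T, N_OUT))   # placeholder targets (e.g., transmitted symbols)
X = run_reservoir(U)
lam = 1e-2
W_out = np.linalg.solve(X.T @ X + lam * np.eye(N_RES), X.T @ Y).T   # (N_OUT, N_RES)
y_hat = X @ W_out.T                                                 # readout predictions
```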
A reinforcement learning agent that needs to pursue different goals across episodes requires a goal-conditional policy. In addition to their potential to generalize desirable behavior to unseen goals, such policies may also enable higher-level planning based on subgoals. In sparse-reward environments, the capacity to exploit information about the degree to which an arbitrary goal has been achieved while another goal was intended appears crucial for sample-efficient learning. However, reinforcement learning agents have only recently been endowed with such a capacity for hindsight. In this paper, we demonstrate how hindsight can be introduced to policy gradient methods, generalizing this idea to a broad class of successful algorithms. Our experiments on a diverse selection of sparse-reward environments show that hindsight leads to a remarkable increase in sample efficiency.
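
A minimal sketch of the hindsight idea in a goal-conditional, sparse-reward setting: a trajectory collected while pursuing one goal is relabeled as if an achieved state had been the intended goal, so it yields informative rewards. The reward definition and the "final achieved state" relabeling rule are illustrative assumptions; the paper's hindsight policy-gradient estimator is more involved.

```python
# A minimal goal-relabeling sketch for sparse-reward, goal-conditional learning.
import numpy as np

def sparse_reward(achieved, goal, tol=0.05):
    """Sparse reward: 1 when the achieved state is within tolerance of the goal, else 0."""
    return float(np.linalg.norm(achieved - goal) <= tol)

def relabel_with_hindsight(trajectory, original_goal):
    """Return (goal, rewards) pairs for the original goal and a hindsight goal."""
    achieved_states = [step["achieved"] for step in trajectory]
    hindsight_goal = achieved_states[-1]            # pretend the final state was intended
    out = []
    for goal in (original_goal, hindsight_goal):
        rewards = [sparse_reward(a, goal) for a in achieved_states]
        out.append((goal, rewards))
    return out

# Example: a short random trajectory in a 2-D goal space.
rng = np.random.default_rng(0)
traj = [{"achieved": rng.random(2)} for _ in range(10)]
batches = relabel_with_hindsight(traj, original_goal=np.array([0.9, 0.9]))
```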
