Policy gradient methods are widely used in reinforcement learning to optimize the expected return. In this paper, we study the evolution of the policy parameters, for a special class of exactly solvable POMDPs, as a continuous-state Markov chain whose transition probabilities are determined by the gradient of the distribution of the policy's value. Our approach relies heavily on random walk theory, specifically on affine Weyl groups. We construct a class of novel partially observable environments with controllable exploration difficulty, in which the value distribution, and hence the policy parameter evolution, can be derived analytically. Using these environments, we analyze the probabilistic convergence of policy gradient to different local maxima of the value function. To our knowledge, this is the first approach that analytically characterizes the optimization landscape of policy gradient in POMDPs for such a class of environments, yielding interesting insights into the difficulty of this problem.
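To make the Markov-chain viewpoint concrete, the following is a minimal sketch, not the paper's construction: a REINFORCE-style stochastic gradient update on a single policy parameter, where each new parameter value depends only on the current one, so the sequence of parameters forms a continuous-state Markov chain whose transition kernel is induced by the randomness of the sampled return. The Bernoulli policy, reward probabilities, step size, and the helper names `sample_return` and `reinforce_step` are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_return(theta):
    """Sample one episode under a Bernoulli policy pi(a=1 | theta) = sigmoid(theta).

    Hypothetical reward structure: action 1 pays 1 with prob 0.7, action 0 with prob 0.4.
    """
    p = 1.0 / (1.0 + np.exp(-theta))
    a = rng.random() < p
    reward = float(rng.random() < (0.7 if a else 0.4))
    return reward, a, p

def reinforce_step(theta, lr=0.1):
    """One stochastic policy-gradient step: theta_{t+1} depends only on theta_t."""
    G, a, p = sample_return(theta)
    # d/dtheta log pi(a | theta) for the Bernoulli policy parameterized by sigmoid(theta)
    grad_log_pi = (1.0 - p) if a else -p
    return theta + lr * G * grad_log_pi

theta = 0.0
for t in range(2000):
    theta = reinforce_step(theta)
print("final P(a=1):", 1.0 / (1.0 + np.exp(-theta)))
```

In this toy example the transition distribution of theta is fully determined by the sampled return and its score function; the paper's environments are chosen so that the analogous transition probabilities, driven by the gradient of the value distribution, admit an exact analysis.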