Identity Concealment Games: How I Learned to Stop Revealing and Love the Coincidences


Abstract

In an adversarial environment, a hostile player performing a task may behave like a non-hostile one in order not to reveal its identity to an opponent. To model such a scenario, we define identity concealment games: zero-sum stochastic reachability games with a zero-sum objective of identity concealment. To measure the identity concealment of the player, we introduce the notion of an average player. The average player's policy represents the expected behavior of a non-hostile player. We show that there exists an equilibrium policy pair for every identity concealment game and give the optimality equations to synthesize an equilibrium policy pair. If the player's opponent follows a non-equilibrium policy, the player can hide its identity better. For this reason, we study how the hostile player may learn the opponent's policy. Since learning via exploration policies would quickly reveal the hostile player's identity to the opponent, we consider the problem of learning a near-optimal policy for the hostile player using the game runs collected under the average player's policy. Consequently, we propose an algorithm that provably learns a near-optimal policy and give an upper bound on the number of sample runs to be collected.
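The abstract states that optimality equations are given for synthesizing an equilibrium policy pair, but it does not reproduce them. As a generic point of reference only (not the paper's formulation, which uses a reachability objective and an identity-concealment payoff defined via the average player), Shapley-type optimality equations for a discounted zero-sum stochastic game take the form

$$
V(s) \;=\; \max_{\pi_1(\cdot \mid s)} \; \min_{\pi_2(\cdot \mid s)} \; \sum_{a,b} \pi_1(a \mid s)\, \pi_2(b \mid s) \Big( r(s,a,b) \;+\; \gamma \sum_{s'} P(s' \mid s,a,b)\, V(s') \Big),
$$

where an equilibrium policy pair is read off from the maximizing and minimizing action distributions at each state.

The learning step described in the abstract, estimating the opponent's behavior from game runs collected under the average player's policy and then playing better than the equilibrium, can be illustrated with the following minimal sketch. It is not the paper's algorithm: the game model, payoff, run format, and all identifiers (N_STATES, N_A, N_B, P, R, runs) are hypothetical placeholders, and the best response is computed by plain value iteration against the empirically estimated opponent policy.

```python
# Minimal sketch (not the paper's algorithm): estimate the opponent's
# state-conditioned action frequencies from game runs collected under the
# average player's policy, then compute a best response for the hostile
# player by value iteration on the induced MDP. All model quantities below
# are hypothetical placeholders.
import numpy as np

N_STATES, N_A, N_B = 5, 3, 3   # states, hostile player actions a, opponent actions b
GAMMA = 0.95

rng = np.random.default_rng(0)
# Hypothetical game model: P[s, a, b] is a distribution over next states,
# R[s, a, b] is the hostile player's one-step payoff.
P = rng.dirichlet(np.ones(N_STATES), size=(N_STATES, N_A, N_B))
R = rng.normal(size=(N_STATES, N_A, N_B))

# Game runs: (state, opponent_action) pairs observed while the opponent
# interacted with (what it believes to be) an average player.
runs = [(int(rng.integers(N_STATES)), int(rng.integers(N_B))) for _ in range(2000)]

# 1) Empirical estimate of the opponent's policy, with add-one smoothing so
#    that unvisited (state, action) pairs keep positive probability.
counts = np.ones((N_STATES, N_B))
for s, b in runs:
    counts[s, b] += 1
opp_policy = counts / counts.sum(axis=1, keepdims=True)    # shape (S, B)

# 2) Best response: against a fixed opponent policy the game collapses to a
#    single-agent MDP, which standard value iteration solves.
P_mdp = np.einsum('sabn,sb->san', P, opp_policy)           # (S, A, S')
R_mdp = np.einsum('sab,sb->sa', R, opp_policy)             # (S, A)

V = np.zeros(N_STATES)
for _ in range(500):
    Q = R_mdp + GAMMA * (P_mdp @ V)                        # (S, A)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new
best_response = Q.argmax(axis=1)                           # greedy policy per state

print("estimated opponent policy:\n", np.round(opp_policy, 2))
print("best-response actions per state:", best_response)
```

Because the opponent's policy is held fixed in the second step, ordinary value iteration suffices in this sketch; the paper's sample-complexity bound on the number of runs needed for a provably near-optimal policy is not reflected here.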
