Multiple Stopping Time POMDPs: Structural Results & Application in Interactive Advertising in Social Media


Abstract

This paper considers a multiple stopping time problem for a Markov chain observed in noise, where a decision maker chooses at most L stopping times to maximize a cumulative objective. We formulate the problem as a Partially Observed Markov Decision Process (POMDP) and derive structural results for the optimal multiple stopping policy. The main results are as follows: i) The optimal multiple stopping policy is characterized by threshold curves in the unit simplex of Bayesian posteriors. ii) The stopping sets (defined by the threshold curves) exhibit a nested structure. iii) The optimal cumulative reward is monotone with respect to the copositive ordering of the transition matrix. iv) A stochastic gradient algorithm is provided for estimating linear threshold policies by exploiting the structural results. These linear threshold policies approximate the threshold curves and share the monotone structure of the optimal multiple stopping policy. As an illustrative example, we apply the multiple stopping framework to interactively schedule advertisements in live online social media. It is shown that advertisement scheduling using multiple stopping performs significantly better than currently used methods.
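To make the setting concrete, the following is a minimal Python sketch, not taken from the paper, of two ingredients the abstract refers to: the HMM filter that propagates the Bayesian posterior (belief) of the noisily observed Markov chain, and a linear threshold stopping rule on the unit simplex whose coefficients could be tuned with a simultaneous-perturbation (SPSA-style) stochastic gradient update. The parameterization of theta, the reward simulator, and the step sizes are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def hmm_filter(belief, P, B, obs):
    """One Bayesian posterior update for a Markov chain observed in noise.

    belief: current posterior over states (length-X vector on the unit simplex)
    P:      X x X transition matrix
    B:      X x Y observation likelihood matrix, B[x, y] = P(obs = y | state = x)
    obs:    index of the received observation
    """
    unnormalized = B[:, obs] * (P.T @ belief)   # predict with P, correct with likelihood
    return unnormalized / unnormalized.sum()

def linear_threshold_stop(belief, theta, stops_remaining):
    """Illustrative linear threshold policy: declare a stop when a linear
    functional of the belief crosses zero; one hyperplane per number of
    remaining stops (theta has shape (L, X))."""
    return float(theta[stops_remaining - 1] @ belief) >= 0.0

def spsa_update(theta, simulate_reward, step=0.01, perturb=0.05, rng=None):
    """One SPSA-style gradient-ascent step on the policy parameters theta,
    using only noisy evaluations of the cumulative reward returned by an
    assumed POMDP simulator simulate_reward(theta)."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=theta.shape)        # random +/-1 perturbation
    r_plus = simulate_reward(theta + perturb * delta)
    r_minus = simulate_reward(theta - perturb * delta)
    grad_est = (r_plus - r_minus) / (2.0 * perturb) * delta  # two-sided gradient estimate
    return theta + step * grad_est
```

In this sketch, keeping one hyperplane per remaining stop loosely mirrors the nested structure of the stopping sets described in the abstract; the nesting itself is not enforced here and would need to be imposed or checked separately.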
