Minimizing the Information Leakage Regarding High-Level Task Specifications


Abstract

We consider a scenario in which an autonomous agent carries out a mission in a stochastic environment while being passively observed by an adversary. For the agent, minimizing the information leaked to the adversary about its high-level specification is critical to creating an informational advantage. We express the agent's specification as a parametric linear temporal logic formula, measure the information leakage by the adversary's confidence in the agent's mission specification, and propose algorithms to synthesize a policy for the agent that minimizes this leakage. In the scenario considered, the adversary aims to infer the agent's specification from a set of candidate specifications, each with an associated likelihood. The agent's objective is to synthesize a policy that maximizes the entropy of the adversary's likelihood distribution while satisfying its specification. We propose two approaches to solve the resulting synthesis problem. The first approach computes the exact satisfaction probability of each candidate specification, whereas the second approach approximates these probabilities using the Fréchet inequalities. For each approach, we formulate a mixed-integer program with a quasiconcave objective function and solve it with a bisection algorithm. Finally, we compare the performance of the two approaches in numerical simulations.
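To make the quantities in the abstract concrete, the snippet below is a minimal sketch, not the paper's implementation: it computes the entropy of an adversary's belief over candidate specifications from their satisfaction probabilities (exact or Fréchet-approximated), and runs a generic bisection loop on the achievable level of a quasiconcave objective. The function names, the `feasible_at` oracle (standing in for a feasibility mixed-integer program), and the Bayesian-style normalization of the belief are illustrative assumptions, not the paper's exact formulation.

```python
import math

def frechet_bounds(p_a, p_b):
    """Fréchet inequalities for the conjunction of two events:
    max(0, P(A) + P(B) - 1) <= P(A and B) <= min(P(A), P(B))."""
    return max(0.0, p_a + p_b - 1.0), min(p_a, p_b)

def belief_entropy(priors, sat_probs):
    """Entropy (in nats) of the adversary's belief over candidate
    specifications, formed by reweighting the prior likelihoods with
    the satisfaction probabilities and normalizing (an assumption
    here, not necessarily the paper's belief-update rule)."""
    weights = [p * q for p, q in zip(priors, sat_probs)]
    total = sum(weights)
    belief = [w / total for w in weights]
    return -sum(b * math.log(b) for b in belief if b > 0.0)

def bisection_maximize(feasible_at, lo, hi, tol=1e-6):
    """Maximize a quasiconcave objective by bisecting on its level t.
    `feasible_at(t)` should report whether some feasible policy
    achieves objective value >= t (e.g., via a feasibility MIP)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible_at(mid):
            lo = mid   # level is achievable; search higher
        else:
            hi = mid   # level is unachievable; search lower
    return lo

# Hypothetical example: three candidates with uniform priors.
priors = [1 / 3, 1 / 3, 1 / 3]
sat_probs = [0.9, 0.6, 0.7]
print(belief_entropy(priors, sat_probs))
```

Since the entropy of a distribution over K candidates is at most log K, bisecting the objective level over [0, log K] suffices for the outer loop in this sketch.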
