Abstract Hidden Markov Models: a monadic account of quantitative information flow


Abstract in English

Hidden Markov Models, HMMs, are mathematical models of Markov processes whose state is hidden, but from which information can leak. They are typically represented as 3-way joint-probability distributions. We use HMMs as denotations of probabilistic hidden-state sequential programs: for that, we recast them as `abstract' HMMs, computations in the Giry monad $\mathbb{D}$, and we equip them with a partial order of increasing security. However, to encode the monadic type with hiding over some state $\mathcal{X}$ we use $\mathbb{D}\mathcal{X}\to\mathbb{D}^2\mathcal{X}$ rather than the conventional $\mathcal{X}\to\mathbb{D}\mathcal{X}$ that suffices for Markov models whose state is not hidden. We illustrate the $\mathbb{D}\mathcal{X}\to\mathbb{D}^2\mathcal{X}$ construction with a small Haskell prototype. We then present uncertainty measures as a generalisation of the extant diversity of probabilistic entropies, with characteristic analytic properties for them, and show how the new entropies interact with the order of increasing security. Furthermore, we give a `backwards' uncertainty-transformer semantics for HMMs that is dual to the `forwards' abstract HMMs: it is an analogue of the duality between forwards, relational semantics and backwards, predicate-transformer semantics for imperative programs with demonic choice. Finally, we argue that, from this new denotational-semantic viewpoint, one can see that the Dalenius desideratum for statistical databases is actually an issue in compositionality. We propose a means for taking it into account.
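The following Haskell fragment is a minimal sketch, not the paper's prototype, of the type $\mathbb{D}\mathcal{X}\to\mathbb{D}^2\mathcal{X}$: a finite distribution monad together with an abstract-channel step that takes a prior over the hidden state to a hyper-distribution, i.e. a distribution over the posteriors induced by each possible observation. The names Dist, Hyper and abstractChannel are illustrative assumptions, not identifiers from the paper.

```haskell
-- A minimal sketch (not the paper's prototype) of the D X -> D^2 X idea:
-- a finite distribution monad plus an abstract-channel step that maps a
-- prior on the hidden state to a hyper-distribution, i.e. a distribution
-- over posteriors. Names here are illustrative only.

module AbstractHMMSketch where

import Data.Function (on)
import Data.List (groupBy, sortOn)

-- Finite probability distributions as weighted lists.
newtype Dist a = Dist { runDist :: [(a, Double)] } deriving Show

point :: a -> Dist a                       -- monadic unit
point x = Dist [(x, 1)]

bindD :: Dist a -> (a -> Dist b) -> Dist b -- monadic bind
bindD (Dist xs) f =
  Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (f x) ]

-- A hyper-distribution: a distribution over (posterior) distributions.
type Hyper a = Dist (Dist a)

-- One abstract-channel step of type D X -> D^2 X: push the prior through
-- a probabilistic observation function, group the joint weights by
-- observation, and record each observation's marginal probability
-- together with the posterior on the hidden state that it induces.
abstractChannel :: Ord o => (x -> Dist o) -> Dist x -> Hyper x
abstractChannel ch (Dist prior) =
  Dist [ (normalise ps, sum (map snd ps))
       | grp <- groupBy ((==) `on` fst) (sortOn fst joint)
       , let ps = map snd grp ]
  where
    -- Joint weights over (observation, hidden state).
    joint = [ (o, (x, p * q))
            | (x, p) <- prior, (o, q) <- runDist (ch x) ]
    normalise ps =
      let total = sum (map snd ps)
      in  Dist [ (x, w / total) | (x, w) <- ps ]
```

For example, with a uniform prior over two states and a channel that reports the true state with probability 0.9, abstractChannel yields a hyper-distribution containing one posterior per observation, each weighted by that observation's overall probability; applying the channel leaks information by sharpening the posteriors relative to the prior.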

Download