
Latticed $k$-Induction with an Application to Probabilistic Programs

Added by Kevin Batz
Publication date: 2021
Language: English





We revisit two well-established verification techniques, $k$-induction and bounded model checking (BMC), in the more general setting of fixed point theory over complete lattices. Our main theoretical contribution is latticed $k$-induction, which (i) generalizes classical $k$-induction for verifying transition systems, (ii) generalizes Park induction for bounding fixed points of monotonic maps on complete lattices, and (iii) extends from natural numbers $k$ to transfinite ordinals $\kappa$, thus yielding $\kappa$-induction. The lattice-theoretic understanding of $k$-induction and BMC enables us to apply both techniques to the fully automatic verification of infinite-state probabilistic programs. Our prototypical implementation manages to automatically verify non-trivial specifications for probabilistic programs taken from the literature that, using existing techniques, cannot be verified without first synthesizing a stronger inductive invariant.
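To make the lattice-theoretic rules concrete, here is a minimal sketch of their shape, for a monotone map $\Phi$ on a complete lattice $(E, \sqsubseteq)$ and a candidate upper bound $f$ on $\mathrm{lfp}\,\Phi$; the auxiliary operator $\Psi_f(X) = f \sqcap \Phi(X)$ is our rendering of the unrolled induction hypothesis and may differ from the paper's exact notation. Park induction is the case $k = 1$:

$$\Phi(f) \sqsubseteq f \quad\Longrightarrow\quad \mathrm{lfp}\,\Phi \sqsubseteq f.$$

Latticed $k$-induction weakens the premise by meeting with $f$ before each further application of $\Phi$:

$$\Phi\bigl(\Psi_f^{\,k-1}(f)\bigr) \sqsubseteq f \quad\Longrightarrow\quad \mathrm{lfp}\,\Phi \sqsubseteq f.$$

For $k > 1$ the premise amounts to showing that the strengthened candidate $\Psi_f^{\,k-1}(f) \sqsubseteq f$ is an ordinary Park invariant, so $k$-induction can succeed on an $f$ that is not inductive by itself.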

Related research

We study weakest precondition reasoning about the (co)variance of outcomes and the variance of run-times of probabilistic programs with conditioning. For outcomes, we show that approximating (co)variances is computationally more difficult than approximating expected values. In particular, we prove that computing both lower and upper bounds for (co)variances is $\Sigma^{0}_{2}$-complete. As a consequence, neither lower nor upper bounds are computably enumerable. We therefore present invariant-based techniques that do enable enumeration of both upper and lower bounds, once appropriate invariants are found. Finally, we extend this approach to reasoning about run-time variances.
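For orientation, a minimal sketch of the quantity involved, in the simplified setting of an almost-surely terminating program $C$ without conditioning (the paper's calculus also handles conditioning and run-times): the variance of the outcome $f$ from initial state $\sigma$ can be written with two weakest pre-expectations,

$$\mathrm{Var}_C[f](\sigma) \;=\; \mathrm{wp}[C](f^2)(\sigma) \;-\; \bigl(\mathrm{wp}[C](f)(\sigma)\bigr)^2,$$

so, for instance, a lower bound on the variance already requires a lower bound on $\mathrm{wp}[C](f^2)$ combined with an upper bound on $\mathrm{wp}[C](f)$.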
This paper investigates the use of generating functions (GFs) encoding measures over the program variables for reasoning about discrete probabilistic programs. To that end, we define a denotational GF-transformer semantics for probabilistic while-programs and show that it instantiates Kozen's seminal distribution transformer semantics. We then study the effective use of GFs for program analysis. We show that finitely expressible GFs enable checking super-invariants by means of computer algebra tools, and that they can be used to determine termination probabilities. The paper concludes by characterizing a class of (possibly infinite-state) programs whose semantics is a rational GF encoding a discrete phase-type distribution.
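As a small worked example of the kind of object involved (our illustration, in informal syntax, not taken from the paper): for the loop $c := 0;\ \mathtt{while}\ (\mathtt{flip}(1/2))\ \{ c := c + 1 \}$, the final value of $c$ is geometrically distributed, and its generating function in the formal variable $x$ is

$$G(x) \;=\; \sum_{k \geq 0} \tfrac{1}{2^{k+1}}\, x^{k} \;=\; \frac{1}{2 - x},$$

a rational GF encoding a discrete phase-type distribution; $G(1) = 1$ certifies termination with probability $1$, and $G'(1) = 1$ gives the expected final value of $c$.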
In this chapter, we explore how (Type-2) computable distributions can be used to give both (algorithmic) sampling and distributional semantics to probabilistic programs with continuous distributions. Towards this end, we sketch an encoding of computable distributions in a fragment of Haskell and show how topological domains can be used to model the resulting PCF-like language. We also examine the implications that a (Type-2) computable semantics has for implementing conditioning. Throughout the chapter, we aim to draw out the connection between an approach based on (Type-2) computability and ordinary programming, and to highlight the relation to constructive mathematics (via realizability).
The correct-by-construction paradigm is an important component of modern Formal Methods, and here we use the probabilistic Guarded-Command Language $\mathit{pGCL}$ to illustrate its application to $\mathit{probabilistic}$ programming. $\mathit{pGCL}$ extends Dijkstra's guarded-command language $\mathit{GCL}$ with probabilistic choice, and is equipped with a correctness-preserving refinement relation $(\sqsubseteq)$ that enables compact, abstract specifications of probabilistic properties to be transformed gradually into concrete, executable code by applying mathematical insights in a systematic and layered way. Characteristically for correctness by construction, as far as possible the reasoning in each refinement-step layer does not depend on earlier layers and does not affect later ones. We demonstrate the technique by deriving a fair-coin implementation of any given discrete probability distribution. In the special case of simulating a fair die, our correct-by-construction algorithm turns out to be within spitting distance of Knuth and Yao's optimal solution.
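For a concrete reference point (our own Python illustration, not the paper's $\mathit{pGCL}$ derivation): the straightforward way to obtain a fair die from fair coins is rejection sampling over three flips, which the Knuth-Yao construction then improves upon by reusing the randomness of rejected draws.

    import random

    def fair_coin() -> int:
        # One fair coin flip: 0 or 1, each with probability 1/2.
        return random.getrandbits(1)

    def fair_die() -> int:
        # Three flips yield a uniform value in {0, ..., 7}; accept 0..5 as the
        # faces 1..6 and retry otherwise. This needs 4 flips on average, whereas
        # Knuth and Yao's optimal scheme reuses the entropy of rejected draws.
        while True:
            v = 4 * fair_coin() + 2 * fair_coin() + fair_coin()
            if v < 6:
                return v + 1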
This paper presents a novel method for the automated synthesis of probabilistic programs. The starting point is a program sketch representing a finite family of finite-state Markov chains with related but distinct topologies, together with a PCTL specification. The method builds on a novel inductive oracle that greedily generates counter-examples (CEs) for violating programs and uses them to prune the family. These CEs leverage the semantics of the family in the form of bounds on its best- and worst-case behaviour, provided by a deductive oracle using an MDP abstraction. The method further monitors the performance of the synthesis and adaptively switches between inductive and deductive reasoning. Our experiments demonstrate that the novel CE construction provides a significantly faster and more effective pruning strategy, leading to an acceleration of the synthesis process on a wide range of benchmarks. For challenging problems, such as the synthesis of decentralized partially observable controllers, we reduce the run-time from a day to minutes.