
Upper approximating probabilities of convergence in probabilistic coherence spaces

Posted by Thomas Ehrhard
Publication date: 2020
Research field: Informatics engineering
Paper language: English
Author: Thomas Ehrhard





We develop a theory of probabilistic coherence spaces equipped with an additional extensional structure and apply it to approximating the probability of convergence of ground-type programs of probabilistic PCF whose free variables are also of ground type. To this end we define an adapted version of the Krivine machine which computes polynomial approximations of the semantics of these programs in the model. These polynomials provide approximations, from below and from above, of the probabilities of convergence; this is made possible by extending the language with an error symbol which is extensionally maximal in the model.
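As a rough illustration of the idea only (this is not the paper's Krivine machine or its polynomial semantics), the sketch below bounds the convergence probability of a toy probabilistic ground-type language by exploring a fixed number of reduction steps; runs that are still unfinished when the budget is exhausted contribute 0 to the lower bound and, like the extensionally maximal error symbol of the paper, 1 to the upper bound. All term constructors are hypothetical.

# Minimal sketch: bound the probability of convergence of a toy probabilistic
# language by exploring only `fuel` reduction steps.  Unfinished runs count as
# 0 for the lower bound and as 1 for the upper bound.
#
# Hypothetical term constructors, for illustration only:
#   ("val", n)            a ground value, converges immediately
#   ("omega",)            a diverging term (reduces to itself)
#   ("flip", p, t1, t2)   reduces to t1 with probability p, else to t2

def convergence_bounds(term, fuel):
    """Return (lower, upper) bounds on the probability that `term` converges."""
    if fuel == 0:
        return 0.0, 1.0                      # out of budget: trivial bounds
    tag = term[0]
    if tag == "val":
        return 1.0, 1.0
    if tag == "omega":
        return convergence_bounds(term, fuel - 1)
    if tag == "flip":
        _, p, t1, t2 = term
        lo1, hi1 = convergence_bounds(t1, fuel - 1)
        lo2, hi2 = convergence_bounds(t2, fuel - 1)
        return p * lo1 + (1 - p) * lo2, p * hi1 + (1 - p) * hi2
    raise ValueError(f"unknown term: {term!r}")

# Converges with probability 0.75: on the first flip, or the second, else diverges.
t = ("flip", 0.5, ("val", 0), ("flip", 0.5, ("val", 1), ("omega",)))
print(convergence_bounds(t, 2))   # (0.5, 1.0)
print(convergence_bounds(t, 3))   # (0.75, 1.0): the lower bound is now exact;
                                  # the upper bound stays 1.0 because the
                                  # diverging branch never finishes in any budget.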




Read also

83 - Thomas Ehrhard 2019
In probabilistic coherence spaces, a denotational model of probabilistic functional languages, morphisms are analytic and therefore smooth. We explore two related applications of the corresponding derivatives. First we show how derivatives allow one to compute the expectation of execution time in the weak head reduction of probabilistic PCF (pPCF). Next we apply a general notion of local differential of morphisms to the proof of a Lipschitz property of these morphisms, which in turn allows us to relate the observational distance on pPCF terms to a distance the model is naturally equipped with. This suggests that extending probabilistic programming languages with derivatives, in the spirit of the differential lambda-calculus, could be quite meaningful.
85 - Thomas Ehrhard 2020
In probabilistic coherence spaces, a denotational model of probabilistic functional languages, morphisms are analytic and therefore smooth. We explore two related applications of the corresponding derivatives. First we show how derivatives allow one to compute the expectation of execution time in the weak head reduction of probabilistic PCF (pPCF). Next we apply a general notion of local differential of morphisms to the proof of a Lipschitz property of these morphisms, which in turn allows us to relate the observational distance on pPCF terms to a distance the model is naturally equipped with. This suggests that extending probabilistic programming languages with derivatives, in the spirit of the differential lambda-calculus, could be quite meaningful.
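For intuition only: the link between derivatives and expected execution time is, at ground level, the classical generating-function fact that the derivative at 1 yields the expectation, which the paper lifts to analytic morphisms on probabilistic coherence spaces. The sketch below checks this fact for a hypothetical loop (not taken from the paper) that halts at each round with probability p.

import sympy as sp

z, p = sp.symbols('z p', positive=True)
# Step-count generating function of a hypothetical loop that terminates at
# each round with probability p (a geometric number of reduction steps).
G = p * z / (1 - (1 - p) * z)
expected_steps = sp.simplify(sp.diff(G, z).subs(z, 1))
print(expected_steps)   # 1/p: the derivative at 1 is the expected execution time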
We add probabilistic features to basic thread algebra and its extensions with thread-service interaction and strategic interleaving. Here, threads represent the behaviours produced by instruction sequences under execution and services represent the behaviours exhibited by the components of execution environments of instruction sequences. In a paper concerned with probabilistic instruction sequences, we proposed several kinds of probabilistic instructions and gave an informal explanation for each of them. The probabilistic features added to the extension of basic thread algebra with thread-service interaction make it possible to give a formal explanation in terms of non-probabilistic instructions and probabilistic services. The probabilistic features added to the extensions of basic thread algebra with strategic interleaving make it possible to cover strategies corresponding to probabilistic scheduling algorithms.
A notion of probabilistic lambda-calculus usually comes with a prescribed reduction strategy, typically call-by-name or call-by-value, as the calculus is non-confluent and these strategies yield different results. This is a break with one of the main advantages of lambda-calculus: confluence, which means results are independent of the choice of strategy. We present a probabilistic lambda-calculus where the probabilistic operator is decomposed into two syntactic constructs: a generator, which represents a probabilistic event; and a consumer, which acts on the term depending on a given event. The resulting calculus, the Probabilistic Event Lambda-Calculus, is confluent, and interprets the call-by-name and call-by-value strategies through different interpretations of the probabilistic operator into our generator and consumer constructs. We present two notions of reduction, one via fine-grained local rewrite steps, and one by generation and consumption of probabilistic events. Simple types for the calculus are essentially standard, and they ensure strong normalization. We demonstrate how we can encode call-by-name and call-by-value probabilistic evaluation.
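The calculus itself is not reproduced here; the sketch below only illustrates, with hypothetical helper functions, why call-by-value and call-by-name disagree on a duplicated probabilistic argument, and how naming the probabilistic event separately (the generator) and merely consuming it in the term makes the outcome independent of evaluation order.

import random

def coin():
    return random.random() < 0.5

def double_cbv():
    # call-by-value: the argument is evaluated once, then duplicated
    x = coin()
    return x and x              # two copies of the same outcome

def double_cbn():
    # call-by-name: each use of the argument re-runs the coin flip
    return coin() and coin()    # two independent outcomes

def double_event(event):
    # generator/consumer style: the probabilistic event is generated outside
    # the term and only consumed here, so duplication cannot re-sample it
    return event and event

random.seed(0)
n = 100_000
print(sum(double_cbv() for _ in range(n)) / n)           # ~0.5
print(sum(double_cbn() for _ in range(n)) / n)           # ~0.25
print(sum(double_event(coin()) for _ in range(n)) / n)   # ~0.5, sharing made explicit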
This paper investigates the usage of generating functions (GFs) encoding measures over the program variables for reasoning about discrete probabilistic programs. To that end, we define a denotational GF-transformer semantics for probabilistic while-programs, and show that it instantiates Kozen's seminal distribution transformer semantics. We then study the effective usage of GFs for program analysis. We show that finitely expressible GFs enable checking super-invariants by means of computer algebra tools, and that they can be used to determine termination probabilities. The paper concludes by characterizing a class of -- possibly infinite-state -- programs whose semantics is a rational GF encoding a discrete phase-type distribution.
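A minimal sketch of the generating-function view, with a hypothetical one-variable loop (not taken from the paper) whose closed-form GF is computed by hand; the paper's contribution is the GF-transformer semantics itself and the use of computer algebra for checking super-invariants.

import sympy as sp

z = sp.symbols('z')
# Hypothetical loop "x := 0; while flip(1/2): x := x + 1".
# Its final distribution over x is encoded by the generating function
#   G(z) = sum_k Pr[x = k] * z^k = sum_k (1/2)^(k+1) * z^k = 1 / (2 - z).
G = 1 / (2 - z)
print(sp.series(G, z, 0, 4))      # 1/2 + z/4 + z**2/8 + z**3/16 + O(z**4)
print(G.subs(z, 1))               # total mass 1: the loop terminates almost surely
print(sp.diff(G, z).subs(z, 1))   # expected final value of x: 1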