
An implementation of flow calculus for complexity analysis (tool paper)

Published by: Clement Aubert
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Abstract. We present a tool to automatically perform the data-size analysis of imperative programs written in C. This tool, called pymwp, is inspired by a classical work on complexity analysis [10], and allows one to certify that the size of the values computed by a program will be bounded by a polynomial in the program's inputs. Strategies to provide meaningful feedback on non-polynomial programs and to "tame" the non-determinism of the original analysis were implemented following recent progress [3], but required particular care to accommodate the growing complexity of the analysis. The Python source code is extensively documented, and our numerous example files encompass the original examples as well as multiple test cases. A pip package should make it easy to install pymwp on any platform, but an online demo is also available for convenience.
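As a rough illustration of the kind of program such an analysis targets, the following C function (our own toy example, not one of pymwp's bundled files) computes a value that stays polynomially bounded in its inputs: for non-negative x and y, the result never exceeds x * y.

/* Toy example: the value of r after the loop is at most x * y,
 * a polynomial in the initial values of the inputs. */
int poly_bound(int x, int y) {
    int r = 0;
    int i = 0;
    while (i < x) {
        r = r + y;   /* r grows by at most y per iteration */
        i = i + 1;
    }
    return r;
}

A data-size analysis in the spirit of [10] aims to certify such a bound automatically, and to report when no polynomial bound can be derived, for instance if the loop body doubled r at each iteration.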




Read also

Azer Bestavros (2011)
We define a domain-specific language (DSL) to inductively assemble flow networks from small networks or modules to produce arbitrarily large ones, with interchangeable functionally-equivalent parts. Our small networks or modules are small only as the building blocks in this inductive definition (there is no limit on their size). Associated with our DSL is a type theory, a system of formal annotations to express desirable properties of flow networks together with rules that enforce them as invariants across their interfaces, i.e., the rules guarantee the properties are preserved as we build larger networks from smaller ones. A prerequisite for a type theory is a formal semantics, i.e., a rigorous definition of the entities that qualify as feasible flows through the networks, possibly restricted to satisfy additional efficiency or safety requirements. This can be carried out in one of two ways, as a denotational semantics or as an operational (or reduction) semantics; we choose the first in preference to the second, partly to avoid exponential-growth rewriting in the operational approach. We set up a typing system and prove its soundness for our DSL.
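To give a concrete, heavily simplified flavour of typed interfaces for flow modules, the C sketch below models a module by bounds on the flow it accepts and emits, and only allows sequential composition when the interfaces agree; the struct and the particular rule are our own illustration, not the DSL or type theory of the paper.

#include <stdbool.h>

/* A module exposes a feasible inflow range and a feasible outflow range. */
typedef struct {
    double in_min, in_max;
    double out_min, out_max;
} module_t;

/* Sequential composition a ; b is accepted only if every outflow that a
 * can produce is an inflow that b can accept, so the declared property
 * is preserved across the interface. */
bool compose(module_t a, module_t b, module_t *result) {
    if (a.out_min < b.in_min || a.out_max > b.in_max)
        return false;               /* interface mismatch: reject */
    result->in_min = a.in_min;
    result->in_max = a.in_max;
    result->out_min = b.out_min;
    result->out_max = b.out_max;
    return true;
}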
We improve and refine a method for certifying that the sizes of the values computed by an imperative program will be bounded by polynomials in the sizes of the program's inputs. Our work tames the non-determinism of the original analysis, and offers an innovative way of completing the analysis when a non-polynomial growth is found. We furthermore enrich the analyzed language by adding function definitions and calls, allowing the analyses of different libraries to be composed and offering more modularity in general. The implementation of our improved method, discussed in a tool paper (https://hal.archives-ouvertes.fr/hal-03269121), also required reasoning about the efficiency of some of the operations needed on the matrices produced by the analysis. It is our hope that this work will enable and facilitate static analysis of source code to guarantee its correctness with respect to resource usage.
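As a sketch of what support for function definitions and calls enables, consider the C fragment below (our own illustration, not taken from the paper): a modular analysis can analyze the helper once and reuse its summary at every call site instead of re-analyzing its body.

/* Helper: the returned value is bounded by a + 2*b. */
int add_twice(int a, int b) {
    int c = a + b;
    c = c + b;
    return c;
}

/* Caller: the bound of add_twice composes across calls, so the result
 * stays polynomially bounded in x and y without re-analyzing the helper. */
int caller(int x, int y) {
    int u = add_twice(x, y);
    int v = add_twice(u, x);
    return v;
}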
Provenance is an increasing concern due to the ongoing revolution in sharing and processing scientific data on the Web and in other computer systems. It is proposed that many computer systems will need to become provenance-aware in order to provide satisfactory accountability, reproducibility, and trust for scientific or other high-value data. To date, there is not a consensus concerning appropriate formal models or security properties for provenance. In previous work, we introduced a formal framework for provenance security and proposed formal definitions of properties called disclosure and obfuscation. In this article, we study refined notions of positive and negative disclosure and obfuscation in a concrete setting, that of a general-purpose programming language. Previous models of provenance have focused on special-purpose languages such as workflows and database queries. We consider a higher-order, functional language with sums, products, and recursive types and functions, and equip it with a tracing semantics in which traces themselves can be replayed as computations. We present an annotation-propagation framework that supports many provenance views over traces, including standard forms of provenance studied previously. We investigate some relationships among provenance views and develop some partial solutions to the disclosure and obfuscation problems, including correct algorithms for disclosure and positive obfuscation based on trace slicing.
We introduce a type and effect system, for an imperative object calculus, which infers sharing possibly introduced by the evaluation of an expression, represented as an equivalence relation among its free variables. This direct representation of sharing effects at the syntactic level allows us to express in a natural way, and to generalize, widely used notions in the literature, notably uniqueness and borrowing. Moreover, the calculus is pure in the sense that reduction is defined on language terms only, since they directly encode the store. The advantage of this non-standard execution model with respect to a behaviourally equivalent standard model using a global auxiliary structure is that reachability relations among references are partly encoded by scoping.
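To illustrate the phenomenon such a system infers (the phenomenon only, not the type and effect system itself), the C fragment below, our own example, shows how evaluating an assignment introduces sharing between two variables, which an analysis of this kind would report as the equivalence class {p, q}.

#include <stdlib.h>

typedef struct { int field; } obj;

void sharing_example(void) {
    obj *p = malloc(sizeof *p);
    obj *q = malloc(sizeof *q);
    free(q);          /* drop q's original object before aliasing it */
    q = p;            /* this assignment introduces sharing: p and q now alias */
    q->field = 42;    /* the write is observable through p as well */
    free(p);
}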
We report on an inversion tool for a class of oriented conditional constructor term rewriting systems. Four well-behaved rule inverters ranging from trivial to full, partial and semi-inverters are included. Conditional term rewriting systems are theoretically well founded and can model functional and non-functional rewrite relations. We illustrate the inversion by experiments with full and partial …
