We present an imperative object calculus where types are annotated with qualifiers for aliasing and mutation control. There are two key novelties with respect to similar proposals. First, the type system is very expressive. Notably, it adopts the recovery approach, that is, it uses the type context to justify strengthening types, which greatly improves its power by allowing uniqueness and immutability properties to be recovered even in the presence of other references. This is achieved by rules which restrict the use of such other references in the portion of code being recovered. Second, execution is modeled by a non-standard operational model, in which properties of qualifiers can be expressed directly on source terms, rather than as invariants on an auxiliary structure that mimics physical memory. Formally, this is achieved by the block construct, which introduces local variable declarations and, when evaluated, plays the role of the store.
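For intuition only, the recovery mechanism is reminiscent of how Rust's borrow checker restores exclusive, mutable access to a value once temporary shared references to it go out of scope; the following minimal Rust sketch is an analogy, not the paper's calculus or syntax.

    fn main() {
        let mut v = vec![1, 2, 3];         // `v` is the unique owner of the vector
        {
            let r = &v;                    // a second, read-only reference to the same data
            println!("len = {}", r.len()); // while `r` is live, uses are restricted:
                                           // no mutation of the vector is permitted here
        }                                  // `r` goes out of scope
        v.push(4);                         // uniqueness is "recovered": mutation is allowed again
        println!("{:?}", v);
    }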
Gradually typed languages are designed to support both dynamically typed and statically typed programming styles while preserving the benefits of each. While existing gradual type soundness theorems for these languages aim to show that type-based reasoning is preserved when moving from the fully static setting to a gradual one, these theorems do not imply that the correctness of type-based refactorings and optimizations is preserved. Establishing correctness of program transformations is technically difficult, and is often neglected in the metatheory of gradual languages. In this paper, we propose an axiomatic account of program equivalence in a gradual cast calculus, which we formalize in a logic we call gradual type theory (GTT). Based on Levy's call-by-push-value, GTT gives an axiomatic account of both call-by-value and call-by-name gradual languages. We then prove theorems that justify optimizations and refactorings in gradually typed languages. For example, uniqueness principles for gradual type connectives show that if the $\beta\eta$ laws hold for a connective, then casts between that connective must be equivalent to the lazy cast semantics. Contrapositively, this shows that eager cast semantics violates the extensionality of function types. As another example, we show that gradual upcasts are pure and, dually, gradual downcasts are strict. We show the consistency and applicability of our theory by proving that an implementation using the lazy cast semantics gives a logical relations model of our type theory, where equivalence in GTT implies contextual equivalence of the programs. Since GTT also axiomatizes the dynamic gradual guarantee, our model also establishes this central theorem of gradual typing. The model is parametrized by the implementation of the dynamic types, and so gives a family of implementations that validate type-based optimization and the gradual guarantee.
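For reference, the $\beta\eta$ laws mentioned above are the usual extensionality principles; for function types they read, schematically (not in GTT's exact call-by-push-value notation), $(\lambda x.\,M)\,N \equiv M[N/x]$ and $M \equiv \lambda x.\,M\,x$ for $x$ not free in $M$.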
Quantum computation is a topic of significant recent interest, with practical advances coming from both research and industry. A major challenge in quantum programming is dealing with errors (quantum noise) during execution. Because quantum resources (e.g., qubits) are scarce, classical error correction techniques applied at the level of the architecture are currently cost-prohibitive. But while this reality means that quantum programs are almost certain to have errors, there does not yet exist a principled means to reason about erroneous behavior. This paper attempts to fill this gap by developing a semantics for erroneous quantum while-programs, as well as a logic for reasoning about them. This logic permits proving a property we have identified, called $\epsilon$-robustness, which characterizes the possible distance between an ideal program and an erroneous one. We have proved the logic sound, and shown its utility on several case studies, notably by analyzing the robustness of noisy quantum programs.
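To give a rough sense of the property's shape (a sketch only, assuming a metric $d$ on quantum states such as trace distance and writing $[\![P]\!]$ for the semantics of a program; the paper's actual definition is stated within its logic): an erroneous program $\tilde{P}$ is $\epsilon$-robust with respect to an ideal program $P$ when $d([\![P]\!](\rho), [\![\tilde{P}]\!](\rho)) \le \epsilon$ for every admissible input state $\rho$.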
Gradually typed languages allow statically typed and dynamically typed code to interact while maintaining the benefits of both styles. The key to reasoning about these mixed programs is Siek-Vitousek-Cimini-Boyland's (dynamic) gradual guarantee, which says that giving components of a program more precise types only adds runtime type checking, and does not otherwise change behavior. In this paper, we give a semantic reformulation of the gradual guarantee called graduality. We change the name to promote the analogy that graduality is to gradual typing what parametricity is to polymorphism. Each gives a local-to-global, syntactic-to-semantic reasoning principle that is formulated in terms of a kind of observational approximation. Utilizing the analogy, we develop a novel logical relation for proving graduality. We show that embedding-projection pairs (ep pairs) are to graduality what relations are to parametricity. We argue that casts between two types where one is more dynamic (less precise) than the other necessarily form an ep pair, and we use this to cleanly prove the graduality cases for casts from the ep-pair property. To construct ep pairs, we give an analysis of the type dynamism relation (also known as type precision or naive subtyping) that interprets the rules for type dynamism as compositional constructions on ep pairs, analogous to the coercion interpretation of subtyping.
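Concretely, writing $e$ for the upcast from the more precise type $A$ into the more dynamic type $B$, $p$ for the downcast back, and $\sqsubseteq$ for the error-approximation ordering, the standard embedding-projection conditions at work here are that $p \circ e$ is equivalent to $\mathrm{id}_A$ (casting up and back down is a no-op), while $e \circ p \sqsubseteq \mathrm{id}_B$ (casting down and back up may only introduce errors).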
This paper describes a deductive approach to synthesizing imperative programs with pointers from declarative specifications expressed in Separation Logic. Our synthesis algorithm takes as input a pair of assertions---a pre- and a postcondition---which describe two states of the symbolic heap, and derives a program that transforms one state into the other, guided by the shape of the heap. The program synthesis algorithm rests on the novel framework of Synthetic Separation Logic (SSL), which generalises the classical notion of heap entailment $\mathcal{P} \vdash \mathcal{Q}$ to incorporate the possibility of transforming a heap satisfying an assertion $\mathcal{P}$ into a heap satisfying an assertion $\mathcal{Q}$. A synthesized program represents a proof term for a transforming entailment statement $\mathcal{P} \leadsto \mathcal{Q}$, and the synthesis procedure corresponds to a proof search. The derived programs are thus correct by construction, in the sense that they satisfy the ascribed pre/postconditions, and are accompanied by complete proof derivations, which can be checked independently. We have implemented a proof search engine for SSL in the form of a program synthesizer called SuSLik. For efficiency, the engine exploits properties of SSL rules, such as invertibility and commutativity of rule applications on separate heaps, to prune the space of derivations it has to consider. We explain and showcase the use of SSL on characteristic examples, describe the design of SuSLik, and report on our experience of using it to synthesize a series of benchmark programs manipulating heap-based linked data structures.
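As a small illustrative instance (notation simplified, and not necessarily one of the paper's benchmarks), the transforming entailment $\{x \mapsto a * y \mapsto b\} \leadsto \{x \mapsto b * y \mapsto a\}$ asks for a program that swaps the contents of two heap cells; a proof of this statement is, by construction, a straight-line program that loads both values and stores them back swapped, and the proof search that finds this derivation is exactly the synthesis procedure.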
It is a strength of graph-based data formats like RDF that they are very flexible in representing data. The downside is that, to avoid run-time errors, code processing such highly flexible data representations must always handle the most general case, in which attributes might be set-valued or not available at all. The Shapes Constraint Language (SHACL) has been devised to enforce constraints on otherwise arbitrary data structures. We present our approach, Type checking using SHACL (TyCuS), for type checking code that queries RDF data graphs validated by a SHACL shape graph. To this end, we derive SHACL shapes from queries and integrate data shapes and query shapes as types into a $\lambda$-calculus. We provide the formal underpinnings and a proof of type safety for TyCuS. A programmer can use our method to process RDF data with simplified, type-checked code that will not encounter run-time errors (with the usual caveats, since type checking cannot prevent, e.g., accessing empty lists).
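To convey the idea of turning shapes into types with a hypothetical (non-TyCuS) mapping: a shape requiring exactly one name (sh:minCount 1, sh:maxCount 1) and placing no cardinality bound on emails could justify record types like the following Rust sketch, in which the required field needs no run-time presence check.

    // Hypothetical types derived from a shape; illustrative only.
    struct Person {
        name: String,        // minCount 1, maxCount 1  =>  exactly one value, always present
        email: Vec<String>,  // no cardinality bound    =>  a possibly empty set of values
    }

    fn greeting(p: &Person) -> String {
        // `name` is guaranteed by the type, so no run-time availability check is needed.
        format!("Hello, {}!", p.name)
    }

    fn main() {
        let p = Person { name: "Ada".into(), email: Vec::new() };
        println!("{}", greeting(&p));
    }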