
Bottom-up Synthesis of Recursive Functional Programs using Angelic Execution

Published by: Anders Miltner
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





We present a novel bottom-up method for the synthesis of functional recursive programs. While bottom-up synthesis techniques can work better than top-down methods in certain settings, there is no prior technique for synthesizing recursive programs from logical specifications in a purely bottom-up fashion. The main challenge is that effective bottom-up methods need to execute sub-expressions of the code being synthesized, but it is impossible to execute a recursive subexpression of a program that has not been fully constructed yet. In this paper, we address this challenge using the concept of angelic semantics. Specifically, our method finds a program that satisfies the specification under angelic semantics (we refer to this as angelic synthesis), analyzes the assumptions made during its angelic execution, uses this analysis to strengthen the specification, and finally reattempts synthesis with the strengthened specification. Our proposed angelic synthesis algorithm is based on version space learning and therefore deals effectively with many incremental synthesis calls made during the overall algorithm. We have implemented this approach in a prototype called Burst and evaluate it on synthesis problems from prior work. Our experiments show that Burst is able to synthesize a solution to 95% of the benchmarks in our benchmark suite, outperforming prior work.
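To make the loop described in the abstract concrete, the sketch below shows one way the angelic-execution idea can be rendered in OCaml: recursive calls whose result is not yet fixed by the specification are resolved by an angelic choice, the choices are recorded as assumptions, and the assumptions are folded back into the specification before synthesis is retried. The DSL, the candidate set, and every name here are illustrative stand-ins, not the actual Burst implementation.

```ocaml
(* Hypothetical sketch of the angelic-synthesis loop; not the Burst code. *)

(* Toy DSL for candidate bodies of a recursive function f : int list -> int. *)
type expr =
  | Const of int
  | IfEmptyElse of int * expr   (* if input = [] then k else e *)
  | SuccRecTail                 (* 1 + f (List.tl input) *)

(* Angelic evaluation: a recursive call on an argument already constrained by
   [spec] uses that value; otherwise it may return any small value, and the
   choice is recorded as an assumption.  The result lists every
   (value, assumptions) pair the angel could produce. *)
let rec angelic_eval spec e (input : int list) : (int * (int list * int) list) list =
  match e with
  | Const k -> [ (k, []) ]
  | IfEmptyElse (k, e') -> if input = [] then [ (k, []) ] else angelic_eval spec e' input
  | SuccRecTail ->
      (match input with
       | [] -> []                                                   (* stuck: no smaller input *)
       | _ :: tl ->
           (match List.assoc_opt tl spec with
            | Some v -> [ (1 + v, []) ]
            | None -> List.init 6 (fun v -> (1 + v, [ (tl, v) ]))))  (* angelic choice *)

(* A candidate satisfies the spec angelically if, for every example, some
   angelic execution produces the required output; collect the assumptions. *)
let check_angelic spec e =
  let rec go = function
    | [] -> Some []
    | (i, o) :: rest ->
        (match List.find_opt (fun (v, _) -> v = o) (angelic_eval spec e i) with
         | None -> None
         | Some (_, assms) -> Option.map (fun acc -> assms @ acc) (go rest))
  in
  go spec

(* Outer loop: find an angelically consistent candidate, strengthen the spec
   with its assumptions, and retry until no assumptions remain. *)
let rec synthesize candidates spec fuel =
  if fuel = 0 then None
  else
    match
      List.find_map (fun c -> Option.map (fun a -> (c, a)) (check_angelic spec c)) candidates
    with
    | None -> None
    | Some (c, []) -> Some c                        (* no angelic assumptions: done *)
    | Some (_, assms) -> synthesize candidates (assms @ spec) (fuel - 1)

let () =
  let spec = [ ([], 0); ([ 1; 2; 3 ], 3) ] in       (* examples for list length *)
  let candidates = [ Const 0; IfEmptyElse (0, SuccRecTail) ] in
  match synthesize candidates spec 5 with
  | Some _ -> print_endline "found a candidate under the strengthened spec"
  | None -> print_endline "no candidate"
```

In the paper's algorithm the candidate space is explored bottom-up with version space learning rather than from a fixed list, but the refinement structure (angelic check, assumption extraction, specification strengthening, retry) is the same as sketched here.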




Read also

A key challenge in program synthesis is the astronomical size of the search space the synthesizer has to explore. In response to this challenge, recent work proposed to guide synthesis using learned probabilistic models. Obtaining such a model, however, might be infeasible for a problem domain where no high-quality training data is available. In this work we introduce an alternative approach to guided program synthesis: instead of training a model ahead of time, we show how to bootstrap one just in time, during synthesis, by learning from partial solutions encountered along the way. To make the best use of the model, we also propose a new program enumeration algorithm we dub guided bottom-up search, which extends efficient bottom-up search with guidance from probabilistic models. We implement this approach in a tool called Probe, which targets problems in the popular syntax-guided synthesis (SyGuS) format. We evaluate Probe on benchmarks from the literature and show that it achieves significant performance gains both over unguided bottom-up search and over a state-of-the-art probability-guided synthesizer, which had been trained on a corpus of existing solutions. Moreover, we show that these performance gains do not come at the cost of solution quality: programs generated by Probe are only slightly more verbose than the shortest solutions and perform no unnecessary case-splitting.
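The guided bottom-up search can be pictured with a small sketch: expressions are enumerated in order of increasing cost, where each production's cost is its rounded negative log-probability under the current model, so likelier productions are reached earlier. The grammar, the cost values, and the function names below are illustrative assumptions rather than the Probe code base, and the just-in-time model update from partial solutions is only indicated in a comment.

```ocaml
(* Hypothetical sketch of probability-guided bottom-up enumeration. *)

type expr = X | One | Add of expr * expr | Mul of expr * expr

let rec eval e x =
  match e with
  | X -> x
  | One -> 1
  | Add (a, b) -> eval a x + eval b x
  | Mul (a, b) -> eval a x * eval b x

(* Production cost derived from its probability: round (-log2 p), at least 1. *)
let cost_of_prob p = max 1 (int_of_float (Float.round (-. Float.log2 p)))

(* All expressions of exactly cost [c], reusing a bank of cheaper expressions
   indexed by cost: the core step of bottom-up search. *)
let of_cost costs bank c =
  let get k = Option.value (Hashtbl.find_opt bank k) ~default:[] in
  let leaf name e = if c = costs name then [ e ] else [] in
  let binop name mk =
    let op_c = costs name in
    List.concat_map
      (fun c1 ->
        let c2 = c - op_c - c1 in
        if c2 < 1 then []
        else List.concat_map (fun l -> List.map (mk l) (get c2)) (get c1))
      (List.init (max 0 (c - op_c)) (fun i -> i + 1))
  in
  leaf "x" X @ leaf "1" One
  @ binop "add" (fun l r -> Add (l, r))
  @ binop "mul" (fun l r -> Mul (l, r))

(* Grow the bank level by level until some expression explains all examples.
   In the full algorithm, candidates that satisfy only part of the examples
   would be used here to re-fit [costs] and restart the enumeration. *)
let search costs examples max_cost =
  let bank = Hashtbl.create 16 in
  let rec go c =
    if c > max_cost then None
    else begin
      let es = of_cost costs bank c in
      Hashtbl.replace bank c es;
      match List.find_opt (fun e -> List.for_all (fun (x, y) -> eval e x = y) examples) es with
      | Some e -> Some e
      | None -> go (c + 1)
    end
  in
  go 1

let () =
  let costs = function
    | "x" -> cost_of_prob 0.4 | "1" -> cost_of_prob 0.3
    | "add" -> cost_of_prob 0.2 | _ -> cost_of_prob 0.1
  in
  (* target behaviour: f x = x * x + 1 *)
  match search costs [ (1, 2); (2, 5); (3, 10) ] 20 with
  | Some _ -> print_endline "found a program"
  | None -> print_endline "no program within the cost budget"
```

What distinguishes guided from plain bottom-up search is that the cost function is re-fit during synthesis from partial solutions; the sketch keeps the costs fixed only for brevity.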
Nanosize pores can turn semimetallic graphene into a semiconductor and from being impermeable into the most efficient molecular sieve membrane. However, scaling the pores down to the nanometer, while fulfilling the tight structural constraints imposed by applications, represents an enormous challenge for present top-down strategies. Here we report a bottom-up method to synthesize nanoporous graphene comprising an ordered array of pores separated by ribbons, which can be tuned down to the one-nanometer range. The size, density, morphology and chemical composition of the pores are defined with atomic precision by the design of the molecular precursors. Our measurements further reveal a highly anisotropic electronic structure, where orthogonal one-dimensional electronic bands with an energy gap of ~1 eV coexist with confined pore states, making the nanoporous graphene a highly versatile semiconductor for simultaneous sieving and electrical sensing of molecular species.
Program slicing provides explanations that illustrate how program outputs were produced from inputs. We build on an approach introduced in prior work by Perera et al., where dynamic slicing was defined for pure higher-order functional programs as a Galois connection between lattices of partial inputs and partial outputs. We extend this approach to imperative functional programs that combine higher-order programming with references and exceptions. We present proofs of correctness and optimality of our approach, along with a proof-of-concept implementation and an experimental evaluation.
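As a toy illustration of the Galois-connection view (for the pure fragment only, and nothing like the full system in the paper), consider slicing the single program fst (x, y) = x: partial values with holes form the lattices, the forward function runs the program on a partial input, and the backward function computes the least partial input needed for a demanded partial output. All names below are invented for the example.

```ocaml
(* Hypothetical illustration of slicing as a Galois connection, specialised to
   the one-line program  fst (x, y) = x  over pairs.  Hole marks the parts of
   a value that the slice does not need. *)

type pv = Hole | Num of int | Pair of pv * pv   (* partial values *)

(* Ordering on partial values: Hole is below everything of the right shape. *)
let rec leq a b =
  match a, b with
  | Hole, _ -> true
  | Num n, Num m -> n = m
  | Pair (a1, a2), Pair (b1, b2) -> leq a1 b1 && leq a2 b2
  | _ -> false

(* Forward semantics of fst on partial inputs (monotone). *)
let fwd = function Pair (x, _) -> x | _ -> Hole

(* Backward slicing for fst: the least partial input producing at least o. *)
let bwd = function Hole -> Hole | o -> Pair (o, Hole)

(* The adjunction  bwd o <= i  iff  o <= fwd i, restricted to pair-shaped
   (or Hole) inputs, is what makes (bwd, fwd) a Galois connection. *)
let adjoint o i = leq (bwd o) i = leq o (fwd i)

let () =
  let i = Pair (Num 1, Num 2) in
  assert (adjoint (Num 1) i);   (* demanding the output needs only the first component *)
  assert (adjoint Hole i);
  assert (fwd (bwd (Num 1)) = Num 1);
  print_endline "Galois-connection checks passed"
```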
We implement extraction of Coq programs to functional languages based on MetaCoq's certified erasure. We extend the MetaCoq erasure output language with typing information and use it as an intermediate representation, which we call $\lambda^T_\square$. We complement the extraction functionality with a full pipeline that includes several standard transformations (eta-expansion, inlining, etc.) implemented in a proof-generating manner, along with a verified optimisation pass removing unused arguments. We prove the pass correct w.r.t. a conventional call-by-value operational semantics of functional languages. From the optimised $\lambda^T_\square$ representation, we obtain code in two functional smart contract languages (Liquidity and CameLIGO), the functional language Elm, and a subset of Rust, a multi-paradigm language for systems programming. Rust is currently gaining popularity as a language for smart contracts, and we demonstrate how our extraction can be used to extract smart contract code for the Concordium network. The development is done in the context of the ConCert framework, which enables smart contract verification. We contribute two verified real-world smart contracts (boardroom voting and escrow), which we use, among other examples, to exemplify the applicability of the pipeline. In addition, we develop a verified web application and extract it to fully functional Elm code. In total, this gives us a way to write dependently typed programs in Coq, verify them, and then extract them to several target languages while retaining a small trusted computing base of only MetaCoq and the pretty-printers into these languages.
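One of the passes mentioned above, removal of unused arguments, is easy to picture on a first-order toy IR. The sketch below is only an unverified OCaml illustration of what such a pass does (drop parameters a body never mentions and adjust call sites); it is not the ConCert/MetaCoq implementation, and every type and function name in it is made up for the example.

```ocaml
(* Hypothetical sketch of a dead-parameter elimination pass on a toy IR. *)

type expr =
  | Var of string
  | Int of int
  | Add of expr * expr
  | Call of string * expr list

type def = { name : string; params : string list; body : expr }

(* Does the expression mention the variable [x]? *)
let rec uses x = function
  | Var y -> x = y
  | Int _ -> false
  | Add (a, b) -> uses x a || uses x b
  | Call (_, args) -> List.exists (uses x) args

(* Keep only the parameters each body mentions, and drop the corresponding
   arguments at every call site. *)
let remove_unused_args (defs : def list) : def list =
  let keep = List.map (fun d -> (d.name, List.map (fun p -> uses p d.body) d.params)) defs in
  let filter_args f args =
    match List.assoc_opt f keep with
    | None -> args
    | Some mask -> List.filteri (fun i _ -> List.nth mask i) args
  in
  let rec rewrite = function
    | (Var _ | Int _) as e -> e
    | Add (a, b) -> Add (rewrite a, rewrite b)
    | Call (f, args) -> Call (f, List.map rewrite (filter_args f args))
  in
  List.map
    (fun d ->
      let mask = List.assoc d.name keep in
      { d with
        params = List.filteri (fun i _ -> List.nth mask i) d.params;
        body = rewrite d.body })
    defs

let () =
  (* g x y = x + 1 never uses y; the pass drops y and fixes the call in f. *)
  let g = { name = "g"; params = [ "x"; "y" ]; body = Add (Var "x", Int 1) } in
  let f = { name = "f"; params = [ "a" ]; body = Call ("g", [ Var "a"; Int 0 ]) } in
  match remove_unused_args [ g; f ] with
  | [ g'; f' ] ->
      assert (g'.params = [ "x" ]);
      assert (f'.body = Call ("g", [ Var "a" ]));
      print_endline "unused argument removed"
  | _ -> assert false
```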
This paper describes a deductive approach to synthesizing imperative programs with pointers from declarative specifications expressed in Separation Logic. Our synthesis algorithm takes as input a pair of assertions, a pre- and a postcondition, which describe two states of the symbolic heap, and derives a program that transforms one state into the other, guided by the shape of the heap. The program synthesis algorithm rests on the novel framework of Synthetic Separation Logic (SSL), which generalises the classical notion of heap entailment $\mathcal{P} \vdash \mathcal{Q}$ to incorporate the possibility of transforming a heap satisfying an assertion $\mathcal{P}$ into a heap satisfying an assertion $\mathcal{Q}$. A synthesized program represents a proof term for a transforming entailment statement $\mathcal{P} \leadsto \mathcal{Q}$, and the synthesis procedure corresponds to a proof search. The derived programs are thus correct by construction, in the sense that they satisfy the ascribed pre/postconditions, and are accompanied by complete proof derivations, which can be checked independently. We have implemented a proof search engine for SSL in the form of a program synthesizer called SuSLik. For efficiency, the engine exploits properties of SSL rules, such as invertibility and commutativity of rule applications on separate heaps, to prune the space of derivations it has to consider. We explain and showcase the use of SSL on characteristic examples, describe the design of SuSLik, and report on our experience of using it to synthesize a series of benchmark programs manipulating heap-based linked data structures.
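To give a flavour of synthesis as proof search over a pre/postcondition pair (in a drastically simplified form, with none of the real SSL rules), the sketch below treats a symbolic heap as a finite map of points-to facts and derives a straight-line program of stores that turns the precondition heap into the postcondition heap, treating already-matching facts as the frame. The names and the rule set are invented for the example.

```ocaml
(* Hypothetical, drastically simplified rendition of deriving a program from a
   pre/post pair of symbolic heaps; the real SSL rule set is far richer. *)

type heap = (string * int) list     (* points-to facts:  loc |-> value *)
type stmt = Store of string * int   (* the command  *loc := value      *)

(* Derive a straight-line program transforming [pre] into [post]: matching
   facts are framed away, mismatching ones trigger the "write" rule, and a
   location missing from [pre] makes the derivation fail in this fragment. *)
let synthesize (pre : heap) (post : heap) : stmt list option =
  let rec go acc = function
    | [] -> Some (List.rev acc)
    | (loc, v) :: rest ->
        (match List.assoc_opt loc pre with
         | None -> None                                (* no rule applies *)
         | Some v' when v' = v -> go acc rest          (* frame *)
         | Some _ -> go (Store (loc, v) :: acc) rest)  (* write *)
  in
  go [] post

let () =
  (* { x |-> 1 * y |-> 2 }  ~~>  { x |-> 2 * y |-> 2 } *)
  match synthesize [ ("x", 1); ("y", 2) ] [ ("x", 2); ("y", 2) ] with
  | Some [ Store ("x", 2) ] -> print_endline "derived program: *x := 2"
  | _ -> print_endline "unexpected result"
```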