
Just-in-Time Learning for Bottom-Up Enumerative Synthesis

Added by Shraddha Barke
Publication date: 2020
Language: English





A key challenge in program synthesis is the astronomical size of the search space the synthesizer has to explore. In response to this challenge, recent work proposed to guide synthesis using learned probabilistic models. Obtaining such a model, however, might be infeasible for a problem domain where no high-quality training data is available. In this work, we introduce an alternative approach to guided program synthesis: instead of training a model ahead of time, we show how to bootstrap one just in time, during synthesis, by learning from partial solutions encountered along the way. To make the best use of the model, we also propose a new program enumeration algorithm we dub guided bottom-up search, which extends the efficient bottom-up search with guidance from probabilistic models. We implement this approach in a tool called Probe, which targets problems in the popular syntax-guided synthesis (SyGuS) format. We evaluate Probe on benchmarks from the literature and show that it achieves significant performance gains both over unguided bottom-up search and over a state-of-the-art probability-guided synthesizer, which had been trained on a corpus of existing solutions. Moreover, we show that these performance gains do not come at the cost of solution quality: programs generated by Probe are only slightly more verbose than the shortest solutions and perform no unnecessary case-splitting.
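
To make the idea concrete, below is a minimal, self-contained sketch of guided bottom-up search with just-in-time probability updates, in the spirit of the abstract above. The toy arithmetic grammar, the rounded negative-log-probability costs, the doubling update for productions that appear in the best partial solution, and the restart policy are all illustrative assumptions, not Probe's actual implementation.

    import math

    # Grammar: leaf productions and binary operators over one integer input x.
    LEAVES = {"x": lambda x: x, "1": lambda x: 1}
    OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

    def cost_of(name, probs):
        # Discrete cost = rounded negative log-probability; likelier rules are cheaper.
        return max(1, round(-math.log2(probs[name])))

    def add_to_bank(bank, seen, c, expr, vals):
        # Observational-equivalence pruning: keep one program per output vector.
        if vals not in seen:
            seen.add(vals)
            bank.setdefault(c, []).append((expr, vals))

    def guided_search(examples, max_cost=20, rounds=5):
        names = list(LEAVES) + list(OPS)
        probs = {n: 1.0 / len(names) for n in names}      # start from a uniform model
        ins = [i for i, _ in examples]
        outs = tuple(o for _, o in examples)

        for _ in range(rounds):
            bank, seen, best_partial = {}, set(), None    # bank: cost -> (expr, outputs)
            for c in range(1, max_cost + 1):
                for name, fn in LEAVES.items():
                    if cost_of(name, probs) == c:
                        add_to_bank(bank, seen, c, name, tuple(fn(i) for i in ins))
                for name, fn in OPS.items():
                    oc = cost_of(name, probs)
                    for ca in range(1, c - oc):           # split the remaining cost over args
                        for ea, va in bank.get(ca, []):
                            for eb, vb in bank.get(c - oc - ca, []):
                                vals = tuple(fn(a, b) for a, b in zip(va, vb))
                                add_to_bank(bank, seen, c, f"({name} {ea} {eb})", vals)
                for expr, vals in bank.get(c, []):
                    if vals == outs:
                        return expr                       # full solution found
                    hits = sum(v == o for v, o in zip(vals, outs))
                    if hits and (best_partial is None or hits > best_partial[0]):
                        best_partial = (hits, expr)       # remember the best partial solution
            if best_partial is None:
                return None
            # Just-in-time update: reward productions used in the best partial solution
            # (substring check suffices for these rule names), renormalize, and re-run
            # the enumeration with the new, cheaper costs for those productions.
            for n in names:
                if n in best_partial[1]:
                    probs[n] *= 2.0
            total = sum(probs.values())
            probs = {n: p / total for n, p in probs.items()}
        return None

    # Synthesize a program for f(x) = 2*x + 1 from three input-output pairs.
    print(guided_search([(1, 3), (2, 5), (5, 11)]))       # some expression equivalent to 2*x + 1

On these small examples the plain enumeration already reaches a solution; the point of the update step is that productions seen in partial solutions become cheaper, so later iterations can reach deeper programs sooner, which is the behavior the abstract describes.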




Read More

We present a novel bottom-up method for the synthesis of functional recursive programs. While bottom-up synthesis techniques can work better than top-down methods in certain settings, there is no prior technique for synthesizing recursive programs from logical specifications in a purely bottom-up fashion. The main challenge is that effective bottom-up methods need to execute sub-expressions of the code being synthesized, but it is impossible to execute a recursive subexpression of a program that has not been fully constructed yet. In this paper, we address this challenge using the concept of angelic semantics. Specifically, our method finds a program that satisfies the specification under angelic semantics (we refer to this as angelic synthesis), analyzes the assumptions made during its angelic execution, uses this analysis to strengthen the specification, and finally reattempts synthesis with the strengthened specification. Our proposed angelic synthesis algorithm is based on version space learning and therefore deals effectively with many incremental synthesis calls made during the overall algorithm. We have implemented this approach in a prototype called Burst and evaluate it on synthesis problems from prior work. Our experiments show that Burst is able to synthesize a solution to 95% of the benchmarks in our benchmark suite, outperforming prior work.
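
To make the loop described above concrete, here is a toy, self-contained sketch of angelic synthesis for a recursive list function: candidates are evaluated with recursive calls answered "angelically" when their results are unknown, the assumed results are recorded, added to the specification as new examples, and synthesis is retried. The three hard-coded candidate bodies, the bounded range of angelic guesses, and the simple restart loop are illustrative assumptions, not Burst's version-space algorithm.

    # Candidate bodies for a recursive function f over lists; each takes (xs, rec),
    # where rec stands for the recursive call.
    CANDIDATES = {
        "f(xs) = 0 if xs == [] else xs[0] + f(xs[1:])":
            lambda xs, rec: 0 if not xs else xs[0] + rec(xs[1:]),
        "f(xs) = 0 if xs == [] else 1 + f(xs[1:])":
            lambda xs, rec: 0 if not xs else 1 + rec(xs[1:]),
        "f(xs) = 0 if xs == [] else xs[0]":
            lambda xs, rec: 0 if not xs else xs[0],
    }

    def angelic_eval(body, xs, expected, known, bound=50):
        """Evaluate body on xs. Recursive calls are answered from the known examples when
        possible; otherwise an angelic value is guessed so the result can still equal
        `expected`. Returns the assumptions made, or None if no guess works."""
        calls = []
        def probe(sub):
            key = tuple(sub)
            calls.append(key)
            return known.get(key, 0)          # placeholder value, used only to discover calls
        body(list(xs), probe)
        unknown = [k for k in calls if k not in known]
        if not unknown:
            ok = body(list(xs), lambda s: known[tuple(s)]) == expected
            return {} if ok else None
        key = unknown[0]                      # these bodies make at most one recursive call
        for guess in range(bound):            # angelic choice over a finite range
            assumed = dict(known)
            assumed[key] = guess
            if body(list(xs), lambda s: assumed[tuple(s)]) == expected:
                return {key: guess}
        return None

    def synthesize(examples, max_rounds=10):
        """Angelic synthesis loop: find a candidate that works under angelic recursion,
        turn its assumptions into new examples (a stronger spec), and retry."""
        spec = {tuple(i): o for i, o in examples}
        for _ in range(max_rounds):
            progress = False
            for name, body in CANDIDATES.items():
                assumptions, ok = {}, True
                for xs, out in spec.items():
                    res = angelic_eval(body, xs, out, spec)
                    if res is None:
                        ok = False
                        break
                    assumptions.update(res)
                if ok and not assumptions:
                    return name               # satisfies the spec with no angelic help
                if ok:
                    spec.update(assumptions)  # strengthen the spec with the assumptions
                    progress = True
                    break                     # retry synthesis against the stronger spec
            if not progress:
                return None
        return None

    # Synthesize list sum from two input-output examples.
    print(synthesize([([1, 2, 3], 6), ([4], 4)]))   # prints the list-sum candidate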
Nanosize pores can turn semimetallic graphene into a semiconductor and from being impermeable into the most efficient molecular sieve membrane. However, scaling the pores down to the nanometer, while fulfilling the tight structural constraints imposed by applications, represents an enormous challenge for present top-down strategies. Here we report a bottom-up method to synthesize nanoporous graphene comprising an ordered array of pores separated by ribbons, which can be tuned down to the one nanometer range. The size, density, morphology and chemical composition of the pores are defined with atomic precision by the design of the molecular precursors. Our measurements further reveal a highly anisotropic electronic structure, where orthogonal one-dimensional electronic bands with an energy gap of ~1 eV coexist with confined pore states, making the nanoporous graphene a highly versatile semiconductor for simultaneous sieving and electrical sensing of molecular species.
Fluorescent nanoparticles are widely utilized in a large range of nanoscale imaging and sensing applications. While ultra-small nanoparticles (size <10 nm) are highly desirable, at this size range their photostability can be compromised due to effects such as intensity fluctuation and spectral diffusion caused by interaction with surface states. In this letter, we demonstrate a facile, bottom-up technique for the fabrication of sub-10-nm hBN nanoparticles hosting photostable bright emitters via a catalyst-free hydrothermal reaction between boric acid and melamine. We also implement a simple stabilization protocol that significantly reduces intensity fluctuation by ~85% and narrows the emission linewidth by ~14% by employing a common sol-gel silica coating process. Our study advances a promising strategy for the scalable, bottom-up synthesis of high-quality quantum emitters in hBN nanoparticles.
We address the problem of discovering 3D parts for objects in unseen categories. Being able to learn the geometry prior of parts and transfer this prior to unseen categories poses fundamental challenges for data-driven shape segmentation approaches. Formulating the task as a contextual bandit problem, we propose a learning-based agglomerative clustering framework which learns a grouping policy to progressively group small part proposals into bigger ones in a bottom-up fashion. At the core of our approach is to restrict the local context for extracting part-level features, which encourages generalizability to unseen categories. On the large-scale fine-grained 3D part dataset PartNet, we demonstrate that our method can transfer knowledge of parts learned from 3 training categories to 21 unseen testing categories without seeing any annotated samples. Quantitative comparisons against four shape segmentation baselines show that our approach achieves state-of-the-art performance.
Batching is an essential technique to improve computation efficiency in deep learning frameworks. While batch processing for models with static feed-forward computation graphs is straightforward to implement, batching for dynamic computation graphs such as syntax trees or social network graphs is challenging due to variable computation graph structure across samples. Through simulation and analysis of a Tree-LSTM model, we show the key trade-off between graph analysis time and batching effectiveness in dynamic batching. Based on this finding, we propose a dynamic batching method as an extension to MXNet Gluon's just-in-time compilation (JIT) framework. We show empirically that our method yields up to 6.25 times speed-up on a common dynamic workload, a Tree-LSTM model for the semantic relatedness task.
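
One common way to realize dynamic batching for tree-shaped inputs is to group nodes by depth so that each level is processed as a single batched call; the framework-free sketch below illustrates that idea and the analysis/batching split mentioned above. The `Tree` class, the depth-bucketing analysis, and the stand-in `cell` function are assumptions for illustration, not the paper's MXNet Gluon JIT extension.

    from collections import defaultdict

    class Tree:
        def __init__(self, value, children=()):
            self.value, self.children = value, list(children)

    def levels(trees):
        """Graph-analysis step: group the nodes of all trees by depth."""
        buckets = defaultdict(list)
        def walk(node, depth):
            buckets[depth].append(node)
            for child in node.children:
                walk(child, depth + 1)
        for t in trees:
            walk(t, 0)
        return buckets

    def batched_reduce(trees, cell=sum):
        """Process the deepest level first so children are ready before their parents;
        each level becomes one batched call instead of one call per node."""
        buckets = levels(trees)
        results = {}
        for depth in sorted(buckets, reverse=True):
            for node in buckets[depth]:            # stands in for one vectorized op per level
                child_vals = [results[id(c)] for c in node.children]
                results[id(node)] = cell([node.value] + child_vals)
        return [results[id(t)] for t in trees]

    # Two differently shaped trees handled in the same per-level batched passes.
    t1 = Tree(1, [Tree(2), Tree(3, [Tree(4)])])
    t2 = Tree(5, [Tree(6)])
    print(batched_reduce([t1, t2]))                # [10, 11]

The trade-off the abstract highlights shows up directly here: `levels` is the per-sample graph analysis whose cost grows with tree size and shape variety, while the per-level loop is where batching pays off.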
