
Incremental Relational Lenses

Added by Roly Perera
Publication date: 2018
Language: English





Lenses are a popular approach to bidirectional transformations, a generalisation of the view update problem in databases, in which we wish to make changes to source tables to effect a desired change on a view. However, perhaps surprisingly, lenses have seldom actually been used to implement updatable views in databases. Bohannon, Pierce and Vaughan proposed an approach to updatable views called relational lenses, but to the best of our knowledge this proposal has not been implemented or evaluated to date. We propose incremental relational lenses, which equip relational lenses with a change-propagating semantics that maps small changes to the view to (potentially) small changes to the source tables. We also present a language-integrated implementation of relational lenses and a detailed experimental evaluation, showing orders of magnitude improvement over the non-incremental approach. Our work shows that relational lenses can be used to support expressive and efficient view updates at the language level, without relying on updatable-view support from the underlying database.
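As a rough illustration of the idea only (not the paper's language-integrated implementation), the following Haskell sketch contrasts a classical state-based lens, whose put rewrites the whole source, with an incremental variant whose propagate function translates a view delta into a source delta. The Lens, IncLens, Delta and selectLens names and types are hypothetical.

{-# LANGUAGE RecordWildCards #-}
-- Illustrative sketch: a state-based lens (get/put) plus an incremental
-- variant that propagates a delta on the view instead of the whole view.

import qualified Data.Set as Set
import           Data.Set (Set)

type Table row = Set row

-- A classical ("non-incremental") lens between a source and a view.
data Lens s v = Lens
  { get :: s -> v          -- compute the view from the source
  , put :: s -> v -> s     -- write an updated view back into the source
  }

-- A delta on a table: rows to insert and rows to delete.
data Delta row = Delta { inserts :: Table row, deletes :: Table row }

-- An incremental lens additionally translates view deltas to source deltas.
data IncLens s v dv ds = IncLens
  { base      :: Lens s v
  , propagate :: s -> dv -> ds   -- map a (small) view change to a source change
  }

-- Example: a selection lens whose view keeps the rows satisfying p.
-- put restores the rows that were filtered out; propagate forwards the
-- delta unchanged (inserted rows are assumed to satisfy p).
selectLens :: Ord row => (row -> Bool)
           -> IncLens (Table row) (Table row) (Delta row) (Delta row)
selectLens p = IncLens
  { base = Lens
      { get = Set.filter p
      , put = \src view -> Set.filter (not . p) src `Set.union` view
      }
  , propagate = \_src d -> d
  }

-- Applying a source delta touches only the changed rows.
applyDelta :: Ord row => Table row -> Delta row -> Table row
applyDelta t Delta{..} = (t `Set.difference` deletes) `Set.union` inserts

The point of the sketch is that applyDelta only touches the changed rows, whereas the non-incremental put must rewrite whole tables on every update.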



Related research

Optimizing the physical storage and retrieval of data are two key database management problems. In this paper, we propose a language that can express a wide range of physical database layouts, going well beyond the row- and column-based methods that are widely used in database management systems. We use deductive synthesis to turn a high-level relational representation of a database query into a highly optimized low-level implementation which operates on a specialized layout of the dataset. We build a compiler for this language and conduct experiments using a popular database benchmark, which show that the performance of these specialized queries is competitive with a state-of-the-art in-memory compiled database system.
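For intuition only (the paper's layout language and deductive synthesis go far beyond this), the Haskell sketch below stores the same logical relation under a row-oriented and a column-oriented layout and specialises a sum-of-prices query to each; RowStore, ColStore and the field names are hypothetical.

-- Illustrative sketch: the same logical relation under two physical
-- layouts, with a query written against each layout.

import qualified Data.Vector as V

-- Logical schema: (order id, price)
type Row = (Int, Double)

-- Row-oriented layout: one record per tuple.
newtype RowStore = RowStore (V.Vector Row)

-- Column-oriented layout: one vector per attribute.
data ColStore = ColStore { orderIds :: V.Vector Int
                         , prices   :: V.Vector Double }

-- SELECT sum(price) FROM orders, specialised to each layout.
sumPricesRow :: RowStore -> Double
sumPricesRow (RowStore rows) = V.sum (V.map snd rows)

sumPricesCol :: ColStore -> Double
sumPricesCol store = V.sum (prices store)   -- touches only one column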
In this paper, we address the problem of learning low-dimensional representations of entities in relational databases consisting of multiple tables. Embeddings help to capture the semantics encoded in the database and can be used in a variety of settings, such as auto-completion of tables, fully-neural processing of relational join queries, seamless handling of missing values, and more. Current work is restricted to working with just a single table, or to using embeddings pretrained on an external corpus, making them unsuitable for use in real-world databases. In this work, we look into ways of using attention-based models to learn embeddings for entities in a relational database. We are inspired by BERT-style pretraining methods and are interested in observing how they can be extended for representation learning on structured databases. We evaluate our approach on the auto-completion of relational databases and achieve improvements over standard baselines.
Context-sensitive global analysis of large code bases can be expensive, which can make its use impractical during software development. However, there are many situations in which modifications are small and isolated within a few components, and it is desirable to reuse previous analysis results as much as possible. This has been achieved to date through incremental global analysis fixpoint algorithms that achieve cost reductions at fine levels of granularity, such as changes in program lines. However, these fine-grained techniques are not directly applicable to modular programs, nor are they designed to take advantage of modular structures. This paper describes, implements, and evaluates an algorithm that performs efficient context-sensitive analysis incrementally on modular partitions of programs. The experimental results show that the proposed modular algorithm achieves significant improvements, in both time and memory consumption, when compared to existing non-modular, fine-grained incremental analysis techniques. Furthermore, thanks to the proposed inter-modular propagation of analysis information, our algorithm also outperforms traditional modular analysis even when analyzing from scratch.
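A minimal sketch of the general idea, assuming a hypothetical analyseModule function and summary type: only the edited modules are re-analysed, and a module's dependents are re-queued only when its summary actually changes. The paper's context-sensitive algorithm is considerably more refined than this.

-- Illustrative sketch: a worklist-style, module-granularity incremental
-- fixpoint over per-module analysis summaries.

import qualified Data.Map.Strict as M
import           Data.Map.Strict (Map)

type ModName = String
type Summary = Map String Int          -- placeholder abstract summary

data Module = Module
  { name    :: ModName
  , imports :: [ModName]
  }

-- Placeholder: analyse one module given the summaries of its imports.
analyseModule :: Module -> Map ModName Summary -> Summary
analyseModule _ importSummaries = M.unionsWith max (M.elems importSummaries)

-- Re-analyse only the modules in the initial worklist (the edited ones);
-- a module's dependents are re-queued only if its summary changed.
refixpoint :: Map ModName Module -> Map ModName Summary -> [ModName]
           -> Map ModName Summary
refixpoint prog = go
  where
    dependents m = [ name md | md <- M.elems prog, m `elem` imports md ]

    go summaries []         = summaries
    go summaries (m : rest) =
      case M.lookup m prog of
        Nothing -> go summaries rest
        Just md ->
          let importSums = M.filterWithKey (\k _ -> k `elem` imports md) summaries
              new        = analyseModule md importSums
          in if Just new == M.lookup m summaries
               then go summaries rest                    -- unchanged: stop propagation
               else go (M.insert m new summaries) (rest ++ dependents m)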
Language-integrated query is a popular and powerful programming construct allowing database queries and ordinary program code to interoperate seamlessly and safely. Language-integrated query techniques rely on classical results about the nested relational calculus stating that its queries can be algorithmically translated to SQL, as long as their result type is a flat relation. Cooper and others advocated higher-order nested relational calculi as a basis for language-integrated queries in functional languages such as Links and F#. However, the translation of higher-order relational queries to SQL relies on a rewrite system for which no strong normalization proof has been published: a previous proof attempt does not deal correctly with rewrite rules that duplicate subterms. This paper fills the gap in the literature, explaining the difficulty with the previous attempt, and showing how to extend the $\top\top$-lifting approach of Lindley and Stark to accommodate duplicating rewrites. We also show how to extend the proof to a recently-introduced calculus for heterogeneous queries mixing set and multiset semantics.
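To make the duplication issue concrete, here is a hedged Haskell sketch of a tiny nested-relational-calculus fragment with two standard normalisation rules; the for-over-union rule copies the comprehension body, which is exactly the kind of duplicating rewrite a strong-normalisation proof must handle. The Expr datatype and rule set are illustrative, not the calculus used in the paper.

-- Illustrative sketch: a fragment of a nested relational calculus and
-- two classical normalisation rules used when flattening queries to SQL.

data Expr
  = Var String
  | Table String                -- a base relation
  | Singleton Expr              -- {e}
  | Union Expr Expr             -- e1 ∪ e2
  | For String Expr Expr        -- for (x <- e1) e2
  | If Expr Expr                -- where-clause: if c then e else {}
  deriving Show

-- One step of normalisation (applied at the root only, for brevity).
normStep :: Expr -> Maybe Expr
normStep (For x (Union m n) body) =
  -- for (x <- m ∪ n) body  ~>  (for (x <- m) body) ∪ (for (x <- n) body)
  Just (Union (For x m body) (For x n body))     -- 'body' is duplicated
normStep (For x (Singleton e) body) =
  -- for (x <- {e}) body  ~>  body[e/x]
  Just (subst x e body)
normStep _ = Nothing

-- Capture-naive substitution, sufficient for this closed example.
subst :: String -> Expr -> Expr -> Expr
subst x e (Var y) | x == y = e
subst _ _ v@(Var _)        = v
subst _ _ t@(Table _)      = t
subst x e (Singleton b)    = Singleton (subst x e b)
subst x e (Union a b)      = Union (subst x e a) (subst x e b)
subst x e (If c b)         = If (subst x e c) (subst x e b)
subst x e (For y src body)
  | x == y    = For y (subst x e src) body
  | otherwise = For y (subst x e src) (subst x e body)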
We present the first compositional, incremental static analysis for detecting memory-safety and information leakage vulnerabilities in C-like programs. To do so, we develop the first under-approximate relational program logics for reasoning about information flow, including Insecurity Separation Logic (InsecSL). Like prior under-approximate separation logics, we show that InsecSL can be automated via symbolic execution. We then adapt and extend a prior intra-procedural symbolic execution algorithm to build a bottom-up, inter-procedural and incremental analysis for detecting vulnerabilities. We prove our approach sound in Isabelle/HOL and implement it in a proof-of-concept tool, Underflow, for analysing C programs, which we apply to various case studies.
