
ML Based Lineage in Databases

Added by Michael Leybovich
Publication date: 2021
Language: English





In this work, we track the lineage of tuples throughout their database lifetime. That is, we consider a scenario in which tuples (records) that are produced by a query may affect other tuple insertions into the DB, as part of a normal workflow. As time goes on, exact provenance explanations for such tuples become deeply nested, consume increasing amounts of space, and lose clarity and readability. We present a novel approach to approximate lineage tracking that uses a Machine Learning (ML) and Natural Language Processing (NLP) technique, namely word embeddings. The basic idea is to summarize (and approximate) the lineage of each tuple via a small set of constant-size vectors (the number of vectors per tuple is a hyperparameter). Hence, our solution does not suffer from a space-complexity blow-up over time, and it naturally ranks explanations for the existence of a tuple. We also devise an alternative, improved lineage tracking mechanism that keeps track of and queries lineage at the column level; this lets us better distinguish between the provenance features and the textual characteristics of a tuple. We integrate our lineage computations into the PostgreSQL system via an extension (ProvSQL) and experimentally exhibit useful results in terms of accuracy against exact, semiring-based justifications. In the experiments, we focus on tuples with multiple generations of tuples in their lifelong lineage and analyze them in terms of direct and distant lineage. The experiments suggest a high usefulness potential for the proposed approximate lineage methods and the further suggested enhancements. This holds especially for the column-based vectors method, which exhibits high precision and high per-level recall.
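To make the core idea concrete, here is a minimal Python sketch (assuming NumPy) of how a tuple's lineage could be kept to a bounded set of constant-size vectors and how such summaries could rank explanations. The merge-by-averaging strategy and all names below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def combine_lineage(parent_vector_sets, max_vectors=3):
    """Summarize a derived tuple's lineage as at most `max_vectors`
    constant-size vectors by repeatedly averaging the two most
    similar vectors inherited from the parent tuples."""
    vectors = [v for vs in parent_vector_sets for v in vs]
    while len(vectors) > max_vectors:
        # Find the most similar pair and merge it.
        _, i, j = max(
            (cosine(vectors[i], vectors[j]), i, j)
            for i in range(len(vectors))
            for j in range(i + 1, len(vectors))
        )
        merged = (vectors[i] + vectors[j]) / 2.0
        vectors = [v for k, v in enumerate(vectors) if k not in (i, j)]
        vectors.append(merged)
    return vectors

def explanation_score(target_vecs, candidate_vecs):
    """Rank a candidate tuple as an explanation for a target tuple:
    higher similarity between lineage summaries suggests a likelier
    (approximate) provenance link."""
    return max(cosine(t, c) for t in target_vecs for c in candidate_vecs)

rng = np.random.default_rng(0)
parents = [[rng.normal(size=8)] for _ in range(5)]  # 5 parents, 1 vector each
summary = combine_lineage(parents)
assert len(summary) == 3
```

Because each tuple retains at most `max_vectors` vectors no matter how deep its derivation history grows, per-tuple space stays constant over time, which is the property the abstract emphasizes.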



Related research

We study the impact of priorities on conflict resolution in inconsistent relational databases, extending the framework of repairs and consistent query answers. We propose a set of postulates that an extended framework should satisfy and consider two instantiations of the framework: (locally preferred) l-repairs and (globally preferred) g-repairs. We study the relationships between them and the impact each notion of repair has on the computational complexity of repair checking and consistent query answering.
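As background for the repair semantics this abstract builds on, the following Python sketch shows subset repairs under a key constraint and consistent query answering (an answer is consistent iff it holds in every repair). It illustrates the unprioritized baseline only, not the l-repair/g-repair extensions the paper proposes; all data and names are invented for the example.

```python
from itertools import product

# Tuples violating the key `name` in emp(name, salary).
emp = [("ann", 50), ("ann", 60), ("bob", 40)]

def repairs(tuples, key=lambda t: t[0]):
    """Each repair keeps exactly one tuple per key group (subset-repair
    semantics for key violations)."""
    groups = {}
    for t in tuples:
        groups.setdefault(key(t), []).append(t)
    return [list(choice) for choice in product(*groups.values())]

def consistent_answers(query, tuples):
    """An answer is consistent iff it is returned in every repair."""
    answer_sets = [set(query(r)) for r in repairs(tuples)]
    return set.intersection(*answer_sets)

# Query: names of employees earning at least 45.
q = lambda rel: [name for name, sal in rel if sal >= 45]
print(consistent_answers(q, emp))  # {'ann'}: holds in both repairs
```

A prioritized variant would restrict attention to the preferred repairs only, which is exactly where the l-/g-repair distinction enters.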
The broadening adoption of machine learning in the enterprise is increasing the pressure for strict governance and cost-effective performance, in particular for the common and consequential steps of model storage and inference. The RDBMS provides a natural starting point, given its mature infrastructure for fast data access and processing, along with support for enterprise features (e.g., encryption, auditing, high availability). To take advantage of all of the above, we need to address a key concern: can in-RDBMS scoring of ML models match (or outperform) the performance of dedicated frameworks? We answer the above positively by building Raven, a system that leverages native integration of ML runtimes (i.e., ONNX Runtime) deep within SQL Server, and a unified intermediate representation (IR) to enable advanced cross-optimizations between ML and DB operators. In this optimization space, we identify research opportunities that combine DB, compiler, and ML thinking. Our initial evaluation on real data demonstrates performance gains of up to 5.5x from the native integration of ML in SQL Server, and up to 24x from cross-optimizations; we will demonstrate Raven live during the conference talk.
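For context on what native integration of an ML runtime involves, here is a minimal sketch using the standalone ONNX Runtime Python API (the runtime Raven embeds in SQL Server). The model path and input shape are assumptions, and this is ordinary out-of-process usage, not Raven's internal interface.

```python
import numpy as np
import onnxruntime as ort

# Assumption: "model.onnx" is any ONNX model taking one float32 matrix.
session = ort.InferenceSession("model.onnx")
input_name = session.get_inputs()[0].name

def score_batch(rows):
    """Score a batch of feature rows; conceptually, this is the work an
    in-RDBMS ML operator performs for each batch of tuples."""
    features = np.asarray(rows, dtype=np.float32)
    return session.run(None, {input_name: features})[0]

# e.g., score_batch([[5.1, 3.5, 1.4, 0.2]])
```

Raven's reported gains come from cross-optimizing this scoring step with DB operators inside the engine, which a standalone call like the one above cannot exploit.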
One of the most important aspects of security organization is to establish a framework to identify security significant points where policies and procedures are declared. The (information) security infrastructure comprises entities, processes, and technology. All are participants in handling information, which is the item that needs to be protected. Privacy and security information technology is a critical and unmet need in the management of personal information. This paper proposes concepts and technologies for management of personal information. Two different types of information can be distinguished: personal information and nonpersonal information. Personal information can be either personal identifiable information (PII), or nonidentifiable information (NII). Security, policy, and technical requirements can be based on this distinction. At the conceptual level, PII is defined and formalized by propositions over infons (discrete pieces of information) that specify transformations in PII and NII. PII is categorized into simple infons that reflect the proprietor's aspects, relationships with objects, and relationships with other proprietors. The proprietor is the identified person about whom the information is communicated. The paper proposes a database organization that focuses on the PII spheres of proprietors. At the design level, the paper describes databases of personal identifiable information built exclusively for this type of information, with their own conceptual scheme, system management, and physical structure.
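The infon-based formalization can be made concrete with a short sketch. The representation below (an infon as a relation applied to arguments, classified as PII iff some argument refers to an identified proprietor) is one illustrative reading of the paper's definitions; all identifiers are invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Infon:
    """A discrete piece of information: a relation over arguments."""
    relation: str
    args: tuple

def is_pii(infon, proprietors):
    """An infon is personal identifiable information (PII) iff some
    argument refers to an identified proprietor; otherwise it is NII."""
    return any(arg in proprietors for arg in infon.args)

proprietors = {"alice"}
owns = Infon("owns", ("alice", "car-17"))   # simple infon: proprietor-object
costs = Infon("costs", ("car-17", 20000))   # NII: no proprietor mentioned
print(is_pii(owns, proprietors), is_pii(costs, proprietors))  # True False
```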
Variability inherently exists in databases in various contexts, which creates database variants. For example, variants of a database could have different schemas/content (the database evolution problem), variants could stem from different sources (the data integration problem), variants could be deployed differently for specific application domains (deploying a database for different configurations of a software system), etc. Unfortunately, while there are specific solutions to each of the problems arising in these contexts, there is no general solution that accounts for variability in databases and addresses managing variability within a database. In this paper, we formally define variational databases (VDBs) and statically typed variational relational algebra (VRA) to query VDBs; both the database and the queries explicitly account for variation. We also design and implement a variational database management system (VDBMS) to run variational queries over a VDB effectively and efficiently. To assess this, we generate two VDBs from real-world databases in the context of software development and database evolution, with a set of experimental queries for each.
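To convey the flavor of querying a VDB, here is a toy Python sketch in which each tuple carries a presence condition and a configuration step derives an ordinary relation. Real VRA uses feature expressions and static typing over variational schemas, so the set-based conditions and all names here are simplifying assumptions.

```python
# Each tuple pairs its data with a presence condition: the set of
# features that must be enabled for the tuple to exist (a stand-in
# for a full boolean feature expression).
vdb_emp = [
    (("alice", "dev"), {"v1"}),    # present only when feature v1 is on
    (("bob", "ops"), {"v2"}),      # present only when feature v2 is on
    (("carol", "dev"), set()),     # present in every variant
]

def configure(relation, enabled):
    """Derive the plain relation for one configuration: keep tuples
    whose presence condition is satisfied by the enabled features."""
    return [t for t, cond in relation if cond <= enabled]

print(configure(vdb_emp, {"v1"}))  # [('alice', 'dev'), ('carol', 'dev')]
```

A variational query would keep the conditions attached to its results instead of discarding them, answering all configurations at once; that is the capability VDBMS is built to provide efficiently.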
A new type of log, the command log, is being employed to replace the traditional data log (e.g., the ARIES log) in in-memory databases. Instead of recording how tuples are updated, a command log only tracks the transactions being executed, thereby effectively reducing the size of the log and improving performance. Command logging, on the other hand, increases the cost of recovery, because all the transactions in the log after the last checkpoint must be completely redone in case of a failure. In this paper, we first extend the command logging technique to a distributed environment, where all the nodes can perform recovery in parallel. We then propose an adaptive logging approach that combines data logging and command logging. The percentage of data logging versus command logging becomes an optimization between transaction-processing performance and recovery performance, to suit different OLTP applications. Our experimental study compares the performance of our proposed adaptive logging, ARIES-style data logging, and command logging on top of H-Store. The results show that adaptive logging can achieve a 10x boost in recovery and a transaction throughput comparable to that of command logging.
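The trade-off between the two log types can be sketched in a few lines of Python: a data log records per-tuple before/after images, while a command log records only the transaction invocation and re-executes it during recovery. This single-node toy only illustrates the contrast, not H-Store's mechanism or the paper's distributed and adaptive schemes.

```python
data_log = []     # (table, key, before_image, after_image) per touched tuple
command_log = []  # (procedure_name, args) per transaction: far smaller

def apply_transfer(accounts, src, dst, amount):
    accounts[src] -= amount
    accounts[dst] += amount

def transfer(accounts, src, dst, amount):
    # Data (ARIES-style) logging: one entry per tuple update.
    data_log.append(("accounts", src, accounts[src], accounts[src] - amount))
    data_log.append(("accounts", dst, accounts[dst], accounts[dst] + amount))
    # Command logging: record only the invocation.
    command_log.append(("transfer", (src, dst, amount)))
    apply_transfer(accounts, src, dst, amount)

def recover(checkpoint, log):
    """Command-log recovery: re-execute every logged transaction since
    the checkpoint; cheap to log, but recovery redoes all the work."""
    state = dict(checkpoint)
    for _name, args in log:
        apply_transfer(state, *args)
    return state

accounts = {"a": 100, "b": 0}
checkpoint = dict(accounts)
transfer(accounts, "a", "b", 25)
assert recover(checkpoint, command_log) == accounts
```

Adaptive logging, as proposed here, would choose per transaction (or per node) which of the two entry types to write, trading log volume against recovery time.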
