
MORTAL: A Tool of Automatically Designing Relational Storage Schemas for Multi-model Data through Reinforcement Learning

Published by: Gongsheng Yuan
Publication date: 2021
Research field: Informatics Engineering
Paper language: English





Relational databases have powerful, mature capabilities for security, user authentication, query optimization, and more, so several commercial and academic frameworks reuse them to store and query semi-structured data (e.g., XML, JSON) or graph data (e.g., RDF, property graphs). However, each of these works manages only one of the above data models on an RDBMS; none provides tooling that automatically generates a relational schema for storing multi-model data. In this demonstration, we present MORTAL, a novel reinforcement learning-based tool. Given multi-model data spanning different data models and a set of queries, it automatically designs a relational schema that stores the data while delivering good query performance. The demonstration centers on the following modules: generating the initial state from the loaded multi-model data, steering the learning process through parameter settings, constraining the generated relational schema with user-provided semantic constraints, improving the query performance of the schema by specifying a query workload, and a highly interactive interface that shows query performance and storage consumption as users adjust the generated schema.
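To make the reinforcement-learning framing concrete, here is a minimal, self-contained Python sketch of a tabular Q-learning loop over schema designs in the spirit described above. The attribute pool, the merge-only action space, and the toy reward function are all assumptions made for illustration; the actual tool scores candidate schemas by executing the query workload on an RDBMS and measuring time and storage.

```python
import random
from itertools import combinations

# Attribute pool of a small multi-model dataset (a graph edge relation plus a
# document collection), flattened into one list; purely illustrative.
ATTRS = ["p_id", "p_name", "knows_src", "knows_dst", "o_id", "o_items"]

def initial_state():
    # Start fully decomposed: one single-column table per attribute.
    return tuple(frozenset([a]) for a in ATTRS)

def candidate_actions(state):
    # An action merges two tables of the current partition.
    return list(combinations(range(len(state)), 2))

def apply_action(state, action):
    i, j = action
    merged = state[i] | state[j]
    rest = [t for k, t in enumerate(state) if k not in (i, j)]
    return tuple(sorted(rest + [merged], key=sorted))

def reward(state):
    # Toy surrogate for workload cost: joins are needed when attributes of the
    # same entity (shared name prefix) land in different tables; wide tables
    # cost more to scan. The real system would run the queries instead.
    prefixes = {a.split("_")[0] for a in ATTRS}
    join_cost = sum(
        sum(1 for t in state if any(a.startswith(p + "_") for a in t)) - 1
        for p in prefixes)
    scan_cost = sum(len(t) ** 2 for t in state)
    return -(10 * join_cost + scan_cost)

def learn(episodes=300, steps=4, eps=0.2, alpha=0.5, gamma=0.9):
    Q, best, best_r = {}, initial_state(), reward(initial_state())
    for _ in range(episodes):
        s = initial_state()
        for _ in range(steps):
            acts = candidate_actions(s)
            if not acts:
                break
            a = (random.choice(acts) if random.random() < eps  # explore
                 else max(acts, key=lambda x: Q.get((s, x), 0.0)))  # exploit
            s2 = apply_action(s, a)
            r = reward(s2)
            nxt = [Q.get((s2, x), 0.0) for x in candidate_actions(s2)]
            target = r + gamma * max(nxt, default=0.0)
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (target - Q.get((s, a), 0.0))
            if r > best_r:
                best, best_r = s2, r
            s = s2
    return best

print(learn())  # a partition of attributes into tables, i.e., a candidate schema
```

With this toy reward, the loop converges on grouping same-entity attributes into shared tables, which is the flavor of trade-off the demo lets users inspect interactively.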




Read also

How to manage heterogeneous data in a unified way is a significant research topic in the database field. To address this problem, researchers have proposed multi-model databases that support multiple data models on a uniform platform with a single unified query language. However, since relational databases dominate the current market, replacing them is expensive. Moreover, because the theories and technologies behind RDBMSs have been refined over decades, it is hard to develop, within a few years, a multi-model database that matches existing RDBMSs in security, query optimization, transaction management, and so on. In this paper, we reconsider employing relational databases to store and query multi-model data. Unfortunately, the mismatch between the complexity of multi-model data structures and the simplicity of flat relational tables makes this difficult. To meet this challenge, we use reinforcement learning (RL) to learn a relational schema by interacting with an RDBMS. Instead of the classic Q-learning algorithm, we propose a variant called Double Q-tables, which reduces the dimension of the original Q-table and improves learning efficiency. Experimental results show that our approach learns a relational schema that outperforms the existing multi-model storage schema in both query time and space consumption.
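The abstract names the Double Q-tables variant but does not spell out its construction, so the following is only a generic illustration of the dimension-reduction idea: instead of one table keyed by full (state, action) pairs, keep two smaller tables and combine their entries when scoring an action. Splitting the temporal-difference correction evenly between the two tables is an assumption, not the paper's algorithm.

```python
# Illustrative only: one large Q-table over (state, action) pairs is factored
# into two smaller tables whose scores are summed; the even split of the TD
# update between them is an assumption for this sketch.
class DoubleQTables:
    def __init__(self, alpha=0.5, gamma=0.9):
        self.q_state = {}   # keyed by a compact state feature
        self.q_action = {}  # keyed by the action alone
        self.alpha, self.gamma = alpha, gamma

    def score(self, s_feat, a):
        return self.q_state.get(s_feat, 0.0) + self.q_action.get(a, 0.0)

    def update(self, s_feat, a, r, s2_feat, next_actions):
        best_next = max((self.score(s2_feat, a2) for a2 in next_actions),
                        default=0.0)
        td = r + self.gamma * best_next - self.score(s_feat, a)
        self.q_state[s_feat] = self.q_state.get(s_feat, 0.0) + 0.5 * self.alpha * td
        self.q_action[a] = self.q_action.get(a, 0.0) + 0.5 * self.alpha * td
```

Two tables indexed separately grow linearly in states plus actions rather than in their product, which is the efficiency gain the abstract alludes to.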
We develop a query answering system built around the idea of query answering by rewriting. For this purpose we extend the description logic DL-Lite with support for n-ary relations, obtaining DLR-Lite, which remains polynomial in the size of the data. We devise a flexible way of mapping the conceptual level to the relational level, giving users an SQL-like query language over the conceptual schema. The rewriting technique adds value to conventional query answering: users can formulate simpler queries, and the system infers additional information that was not stated explicitly in the user query. The formalization of the conceptual schema and the accompanying reasoning technique also allow checking consistency between the database and the conceptual schema, improving the trustworthiness of the information system.
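To illustrate the rewriting idea in miniature: TBox axioms are compiled into the user's query, producing a union of conjunctive queries that a plain SQL engine can evaluate with no reasoning at runtime. The tiny TBox, mappings, and table layout below are invented, and only atomic subclass axioms are handled; full DL-Lite rewriting (e.g., PerfectRef) covers far more.

```python
# Toy flavor of query answering by rewriting: subclass axioms are folded into
# the query so the database never needs to reason. All names are hypothetical.
TBOX = {"Manager": "Employee", "Engineer": "Employee"}  # Manager ⊑ Employee, ...
MAPPING = {
    "Employee": "SELECT id FROM employee",
    "Manager":  "SELECT id FROM employee WHERE role = 'manager'",
    "Engineer": "SELECT id FROM employee WHERE role = 'engineer'",
}

def rewrite(concept):
    """Expand a concept atom into every concept subsumed by it (one rewriting
    step, restricted here to atomic subclass axioms)."""
    return [concept] + [sub for sub, sup in TBOX.items() if sup == concept]

def to_sql(concept):
    # The rewritten query is a union of conjunctive queries: plain SQL UNION.
    return "\nUNION\n".join(MAPPING[c] for c in rewrite(concept))

print(to_sql("Employee"))  # answers include managers and engineers implicitly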
This tutorial overviews the state of the art in learning models over relational databases and makes the case for a first-principles approach that exploits recent developments in database research. The input to learning classification and regression models is a training dataset defined by feature extraction queries over relational databases. The mainstream approach to learning over relational data is to materialize the training dataset, export it out of the database, and then learn over it using a statistical package. This approach can be expensive as it requires the materialization of the training dataset. An alternative approach is to cast the machine learning problem as a database problem by transforming the data-intensive component of the learning task into a batch of aggregates over the feature extraction query and by computing this batch directly over the input database. The tutorial highlights a variety of techniques developed by the database theory and systems communities to improve the performance of the learning task. They rely on structural properties of the relational data and of the feature extraction query, including algebraic (semi-ring), combinatorial (hypertree width), statistical (sampling), or geometric (distance) structure. They also rely on factorized computation, code specialization, query compilation, and parallelization.
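As a concrete taste of the aggregate-pushdown approach described above: for ridge linear regression, the sufficient statistics are the entries of X^T X and X^T y, each a SUM of products over the feature extraction query, so they can be computed as one batch of aggregates inside the database rather than by exporting the materialized join. The schema and columns below are invented for the sketch.

```python
# Minimal sketch, assuming a made-up two-table schema: compute the regression
# aggregates directly over the feature extraction query with SQLite.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
  CREATE TABLE sales(store INT, item INT, units REAL);
  CREATE TABLE stores(store INT, size REAL);
  INSERT INTO sales VALUES (1,1,10),(1,2,7),(2,1,3);
  INSERT INTO stores VALUES (1,120.0),(2,80.0);
""")

features = ["1.0", "t.size"]   # intercept column and one feature
target = "s.units"             # label, purely illustrative

# One batch of SUM aggregates replaces materializing the join s JOIN t:
# SUM(f_i * f_j) gives X^T X, SUM(f_i * y) gives X^T y.
aggs = [f"SUM({a} * {b})" for a in features for b in features]
aggs += [f"SUM({a} * {target})" for a in features]
row = con.execute(
    f"SELECT {', '.join(aggs)} "
    "FROM sales s JOIN stores t ON s.store = t.store"
).fetchone()
print(row)  # sufficient statistics, ready for the closed-form ridge solve
```

Only a handful of scalars leave the database, regardless of how large the joined training dataset would have been, which is the core saving the tutorial highlights.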
Sudeepa Roy, Babak Salimi (2017)
The study of causality, or causal inference (how much a given treatment causally affects a given outcome in a population), goes well beyond correlation or association analysis of variables and is critical to making sound data-driven decisions and policies in a multitude of applications. The gold standard in causal inference is the controlled experiment, which is often impossible for logistical or ethical reasons. As an alternative, inferring causality from observational data based on the Neyman-Rubin potential outcome model has been used extensively in statistics, economics, and the social sciences over several decades. In this paper, we present a formal framework for sound causal analysis on observational datasets that are given as multiple relations, where the population under study is obtained by joining these base relations. We study a crucial condition for inferring causality from observational data, the strong ignorability assumption (the treatment and outcome variables should be independent in the joined relation given the observed covariates), using known conditional independences that hold in the base relations. We also discuss how the structure of the conditional independences in the base relations, given as graphical models, helps infer new conditional independences in the joined relation. The proposed framework combines concepts from databases, statistics, and graphical models, and aims to initiate new research directions spanning these fields to facilitate powerful data-driven decisions in today's big-data world.
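A minimal sketch of what adjustment looks like once the base relations are joined: under strong ignorability, the average treatment effect is ATE = sum over x of P(X=x) * (E[Y | T=1, X=x] - E[Y | T=0, X=x]). The two base relations, their attributes, and the binary covariate below are invented for illustration.

```python
# Covariate adjustment on a population obtained by joining two base relations.
# All relation and attribute names are hypothetical.
from collections import defaultdict
from statistics import mean

patients = [{"pid": i, "x": i % 2} for i in range(8)]              # covariate X
visits = [{"pid": 0, "t": 1, "y": 5.0}, {"pid": 1, "t": 0, "y": 2.0},
          {"pid": 2, "t": 1, "y": 6.0}, {"pid": 3, "t": 1, "y": 3.0},
          {"pid": 4, "t": 0, "y": 4.0}, {"pid": 5, "t": 0, "y": 1.0},
          {"pid": 6, "t": 0, "y": 5.0}, {"pid": 7, "t": 1, "y": 4.0}]

# The study population is the join of the base relations on pid.
joined = [{**p, **v} for p in patients for v in visits if p["pid"] == v["pid"]]

groups = defaultdict(list)   # (covariate value, treatment) -> outcomes
for r in joined:
    groups[(r["x"], r["t"])].append(r["y"])

ate = sum(
    (sum(1 for r in joined if r["x"] == x) / len(joined))       # P(X=x)
    * (mean(groups[(x, 1)]) - mean(groups[(x, 0)]))             # effect at x
    for x in {r["x"] for r in joined}
)
print(f"adjusted ATE estimate: {ate:.2f}")
```

The paper's contribution is exactly the step this sketch takes for granted: certifying, from conditional independences known in the base relations, that ignorability holds in the joined relation so that this adjustment is sound.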
F-IVM is a system for real-time analytics, such as machine learning applications, over training datasets defined by queries on fast-evolving relational databases. We will demonstrate F-IVM on three such applications: model selection, Chow-Liu trees, and ridge linear regression.