
Exchange-Repairs: Managing Inconsistency in Data Exchange

Posted by: Richard Halpert
Publication date: 2015
Research field: Informatics Engineering
Paper language: English
Authors: Balder ten Cate





In a data exchange setting with target constraints, it is often the case that a given source instance has no solutions. In such cases, the semantics of target queries trivialize. The aim of this paper is to introduce and explore a new framework that gives meaningful semantics in such cases by using the notion of exchange-repairs. Informally, an exchange-repair of a source instance is another source instance that differs minimally from the first, but has a solution. Exchange-repairs give rise to a natural notion of exchange-repair certain answers (XR-certain answers) for target queries. We show that for schema mappings specified by source-to-target GAV dependencies and target equality-generating dependencies (egds), the XR-certain answers of a target conjunctive query can be rewritten as the consistent answers (in the sense of standard database repairs) of a union of conjunctive queries over the source schema with respect to a set of egds over the source schema, making it possible to use a consistent query-answering system to compute XR-certain answers in data exchange. We then examine the general case of schema mappings specified by source-to-target GLAV constraints, a weakly acyclic set of target tgds and a set of target egds. The main result asserts that, for such settings, the XR-certain answers of conjunctive queries can be rewritten as the certain answers of a union of conjunctive queries with respect to the stable models of a disjunctive logic program over a suitable expansion of the source schema.
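To make these notions concrete, the following is a minimal Python sketch (all relation and constant names are hypothetical) of a toy setting: a copy GAV mapping from a source relation S(emp, dept) to a target relation T(emp, dept), and one target egd stating that an employee has at most one department. It takes subset-maximality as a stand-in for "differing minimally", which simplifies the paper's framework, and computes XR-certain answers by intersecting the query answers over all repairs.

from itertools import combinations

# Toy source instance that violates (via the copy mapping) the target egd
# T(e, d1) & T(e, d2) -> d1 = d2; all names are hypothetical.
source = {("alice", "hr"), ("alice", "it"), ("bob", "it")}

def solution(src):
    """Chase the copy mapping S(e, d) -> T(e, d); None if the egd fails on constants."""
    target = set(src)
    dept_of = {}
    for e, d in target:
        if dept_of.setdefault(e, d) != d:   # two departments for one employee
            return None
    return target

def exchange_repairs(src):
    """Maximal subsets of the source instance that admit a solution."""
    candidates = [set(sub) for r in range(len(src) + 1)
                  for sub in combinations(src, r)
                  if solution(set(sub)) is not None]
    return [c for c in candidates if not any(c < o for o in candidates)]

def xr_certain(src):
    """XR-certain answers of the identity query q(e, d) :- T(e, d)."""
    answers = [solution(rep) for rep in exchange_repairs(src)]
    return set.intersection(*answers) if answers else set()

print(exchange_repairs(source))  # two repairs, one per choice of alice's department
print(xr_certain(source))        # {('bob', 'it')}: the only fact in every repair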




Read also

Data exchange is the problem of transforming data structured under a source schema into data structured under another schema, called the target schema, so that the source and target data together satisfy the relationship between the schemas. Even though the formal framework of data exchange for relational database systems is well established, it does not immediately carry over to the setting of temporal data, which necessitates reasoning over unbounded periods of time. In this work, we study data exchange for temporal data. We first motivate the need for two views of temporal data: the concrete view, which depicts how temporal data is compactly represented and on which implementations are based, and the abstract view, which defines the semantics of temporal data as a sequence of snapshots. We then extend the chase procedure to the abstract view to obtain a conceptual basis for data exchange over temporal databases. For non-temporal source-to-target tuple-generating dependencies and equality-generating dependencies, the chase algorithm can be applied to each snapshot independently. Next, we define a chase procedure (called c-chase) on concrete instances and show that the result of c-chase on a concrete instance is semantically aligned with the result of the chase on the corresponding abstract instance. In order to interpret intervals as constants while checking whether a dependency or a query is satisfied by a concrete database, we normalize the instance with respect to the dependency or the query. To obtain the semantic alignment, the nulls in the concrete view are annotated with temporal information. Furthermore, we show that the result of the concrete chase provides a foundation for query answering: we define naive evaluation on the result of the c-chase and show that it produces certain answers.
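As a toy rendering of the abstract view, the sketch below (the relation names and the dependency are invented for illustration) chases each snapshot independently with one non-temporal source-to-target tgd, Emp(e, d) -> exists m. Mgr(d, m), inventing a fresh labelled null per unsatisfied existential; this works snapshot by snapshot precisely because the dependency carries no temporal component.

from itertools import count

fresh = count()  # supply of labelled nulls N0, N1, ...

def chase_snapshot(emp_tuples):
    """Chase the tgd Emp(e, d) -> exists m. Mgr(d, m) on a single snapshot."""
    mgr_of, target = {}, set()
    for _e, d in emp_tuples:
        if d not in mgr_of:                  # existential not yet witnessed for d
            mgr_of[d] = f"N{next(fresh)}"    # invent a labelled null
        target.add((d, mgr_of[d]))
    return target

# Abstract instance: a map from time points to snapshots of Emp.
abstract = {
    1: {("alice", "hr"), ("bob", "it")},
    2: {("alice", "it"), ("bob", "it")},
}

result = {t: chase_snapshot(snap) for t, snap in abstract.items()}
print(result)  # each snapshot is chased on its own, with nulls fresh per time point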
This paper addresses the problem of representing the set of repairs of a possibly inconsistent database by means of a disjunctive database. Specifically, the class of denial constraints is considered. We show that, given a database and a set of denial constraints, there exists a (unique) disjunctive database, called canonical, which represents the repairs of the database w.r.t. the constraints and is contained in any other disjunctive database with the same set of minimal models. We propose an algorithm for computing the canonical disjunctive database. Finally, we study the size of the canonical disjunctive database in the presence of functional dependencies for both repairs and cardinality-based repairs.
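A tiny worked instance of the representation claim (the facts are hypothetical, and the brute-force check below is not the paper's algorithm): the database {p(a), q(a)} with the denial constraint forbidding p(x) and q(x) has the two repairs {p(a)} and {q(a)}, and these are exactly the minimal models of the disjunctive database consisting of the single clause p(a) v q(a).

from itertools import combinations

facts = [("p", "a"), ("q", "a")]
clauses = [{("p", "a"), ("q", "a")}]  # the disjunctive database: p(a) v q(a)

def models(clauses, universe):
    """Subsets of the facts that satisfy every clause (pick >= 1 disjunct each)."""
    subsets = (set(c) for r in range(len(universe) + 1)
               for c in combinations(universe, r))
    return [m for m in subsets if all(m & clause for clause in clauses)]

def minimal(ms):
    """Models with no strictly smaller model: the repairs, in this encoding."""
    return [m for m in ms if not any(n < m for n in ms)]

print(minimal(models(clauses, facts)))  # [{('p', 'a')}, {('q', 'a')}]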
Variability inherently exists in databases in various contexts, creating database variants. For example, variants of a database could have different schemas or content (the database evolution problem), variants of a database could stem from different sources (the data integration problem), variants of a database could be deployed differently for specific application domains (deploying a database for different configurations of a software system), and so on. Unfortunately, while there are specific solutions to each of the problems arising in these contexts, there is no general solution that accounts for variability in databases and addresses managing variability within a database. In this paper, we formally define variational databases (VDBs) and a statically-typed variational relational algebra (VRA) to query VDBs---both the database and the queries explicitly account for variation. We also design and implement a variational database management system (VDBMS) to run variational queries over a VDB effectively and efficiently. To assess this, we generate two VDBs from real-world databases in the context of software development and database evolution, with a set of experimental queries for each.
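One way to picture the core idea (a loose sketch only; it reflects neither VRA's actual syntax nor the VDBMS implementation): annotate each tuple with a presence condition over feature names, let operators such as selection carry the annotations along, and configure the result to a plain relation for any given feature assignment.

# Each tuple carries a presence condition, encoded here as a predicate over a
# feature assignment; the schema and feature names are invented for illustration.
vdb = [
    (("alice", "hr"), lambda f: f["v1"]),        # exists only in variant v1
    (("alice", "it"), lambda f: not f["v1"]),    # exists in the other variants
    (("bob", "it"), lambda f: True),             # exists in every variant
]

def v_select(pred, vrelation):
    """Variational selection: filter tuples, keeping their presence conditions."""
    return [(t, cond) for t, cond in vrelation if pred(t)]

def configure(vrelation, features):
    """Project the variational relation onto one concrete database variant."""
    return {t for t, cond in vrelation if cond(features)}

it_staff = v_select(lambda t: t[1] == "it", vdb)
print(configure(it_staff, {"v1": True}))   # {('bob', 'it')}
print(configure(it_staff, {"v1": False}))  # {('alice', 'it'), ('bob', 'it')}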
We present a stochastic analysis of a data set consisting of 10^6 quotes of the US Dollar - German Mark exchange rate. Evidence is given that the price changes x(tau) over different delay times tau can be described as a Markov process evolving in tau. Thus, the dependence of the probability density function (pdf) p(x) on the delay time tau can be described by a Fokker-Planck equation, a generalized diffusion equation for p(x,tau). This equation is completely determined by two coefficients, D_1(x,tau) and D_2(x,tau) (the drift and diffusion coefficients, respectively). We demonstrate how these coefficients can be estimated directly from the data without using any assumptions or models for the underlying stochastic process. Furthermore, it is shown that the solutions of the resulting Fokker-Planck equation describe the empirical pdfs correctly, including the pronounced tails.
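The estimation idea admits a compact sketch: condition on the current value by binning, then read the drift and diffusion coefficients off the first two conditional moments of the increments (Kramers-Moyal estimates). The snippet below runs on synthetic Ornstein-Uhlenbeck data as a stand-in; it does not touch the paper's FX data set or its analysis of the dependence on the delay time tau.

import numpy as np

rng = np.random.default_rng(0)
dt, n = 0.01, 100_000
x = np.empty(n)
x[0] = 0.0
for i in range(n - 1):                 # Euler-Maruyama for dx = -x dt + dW
    x[i + 1] = x[i] - x[i] * dt + np.sqrt(dt) * rng.standard_normal()

inc = np.diff(x)                       # increments over one time step dt
edges = np.linspace(-2.0, 2.0, 21)
idx = np.digitize(x[:-1], edges)

centers, d1, d2 = [], [], []
for b in range(1, len(edges)):
    sel = idx == b
    if sel.sum() < 100:                # skip sparsely populated bins
        continue
    centers.append(0.5 * (edges[b - 1] + edges[b]))
    d1.append(inc[sel].mean() / dt)               # D_1(x) ~ E[dx | x] / dt
    d2.append((inc[sel] ** 2).mean() / (2 * dt))  # D_2(x) ~ E[dx^2 | x] / (2 dt)

print(np.round(centers, 2))
print(np.round(d1, 2))  # should track -x, the drift used in the simulation
print(np.round(d2, 2))  # should hover near 0.5, the diffusion used above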
This paper describes the OI Exchange Format, a standard for exchanging calibrated data from optical (visible/infrared) stellar interferometers. The standard is based on the Flexible Image Transport System (FITS), and supports storage of the optical interferometric observables, including squared visibility and closure phase -- data products not included in radio interferometry standards such as UV-FITS. The format has already gained the support of most currently operating optical interferometer projects, including COAST, NPOI, IOTA, CHARA, VLTI, PTI, and the Keck Interferometer, and is endorsed by the IAU Working Group on Optical Interferometry. Software is available for reading, writing and merging OI Exchange Format files.
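Because the format is plain FITS with binary-table extensions, any generic FITS reader can inspect a file. A minimal sketch using astropy (the file name is a placeholder; OI_VIS2, VIS2DATA, and VIS2ERR are the standard's names for the squared-visibility table and its columns):

from astropy.io import fits

with fits.open("observation.oifits") as hdul:  # hypothetical file name
    hdul.info()                                # list the OI_* table extensions
    vis2 = hdul["OI_VIS2"].data                # squared-visibility table
    print(vis2["VIS2DATA"][:5])                # first few V^2 measurements
    print(vis2["VIS2ERR"][:5])                 # and their uncertainties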