
Consistent Query Answers in the Presence of Universal Constraints

Added by Slawomir Staworko
Publication date: 2009
Language: English





The framework of consistent query answers and repairs has been introduced to alleviate the impact of inconsistent data on the answers to a query. A repair is a minimally different consistent instance, and an answer is consistent if it is present in every repair. In this article we study the complexity of consistent query answering and repair checking in the presence of universal constraints. We propose an extended version of the conflict hypergraph which allows us to capture all repairs w.r.t. a set of universal constraints. We show that repair checking is in PTIME for the class of full tuple-generating dependencies and denial constraints, and we present a polynomial-time repair algorithm. This algorithm is not only sound, i.e. it always produces a repair, but also complete, i.e. every repair can be constructed. Next, we present a polynomial-time algorithm computing consistent answers to ground quantifier-free queries in the presence of denial constraints, join dependencies, and acyclic full tuple-generating dependencies. Finally, we show that extending the class of constraints leads to intractability. For arbitrary full tuple-generating dependencies, consistent query answering becomes coNP-complete. For arbitrary universal constraints, consistent query answering is Pi_2^p-complete and repair checking is coNP-complete.
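To make the notions of repair and consistent answer concrete, the following is a small brute-force sketch in Python. It enumerates repairs as maximal conflict-free subsets of the instance, which corresponds to the conflict-hypergraph view for denial constraints only; it is not the polynomial-time algorithm from the article, and the relation emp and the key constraint are invented for illustration.

```python
from itertools import combinations

# Toy instance: two conflicting salary facts for "alice" and one fact for "bob".
instance = [
    ("emp", "alice", 1000),
    ("emp", "alice", 2000),
    ("emp", "bob", 1500),
]

# Conflict hyperedges: minimal sets of tuples jointly violating a denial
# constraint.  Here: "no employee has two distinct salaries".
conflicts = [frozenset({0, 1})]          # indices into `instance`

def is_consistent(subset):
    """A subset is consistent if it contains no conflict hyperedge in full."""
    return not any(edge <= subset for edge in conflicts)

def repairs():
    """Enumerate repairs: consistent subsets that are maximal under inclusion."""
    all_indices = range(len(instance))
    candidates = [frozenset(s)
                  for r in range(len(instance) + 1)
                  for s in combinations(all_indices, r)
                  if is_consistent(frozenset(s))]
    return [s for s in candidates if not any(s < t for t in candidates)]

def consistent_answer(fact):
    """A ground fact is a consistent answer iff it belongs to every repair."""
    idx = instance.index(fact)
    return all(idx in rep for rep in repairs())

print([sorted(instance[i] for i in rep) for rep in repairs()])
print(consistent_answer(("emp", "bob", 1500)))    # True: present in every repair
print(consistent_answer(("emp", "alice", 1000)))  # False: dropped in one repair
```

This sketch covers only the deletion-based case for denial constraints; capturing all repairs w.r.t. general universal constraints requires the extended conflict hypergraph described in the article.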



Related research

We consider here the problem of obtaining reliable, consistent information from inconsistent databases, i.e. databases that do not have to satisfy given integrity constraints. We use the notion of consistent query answer: a query answer which is true in every (minimal) repair of the database. We provide a complete classification of the computational complexity of consistent answers to first-order queries w.r.t. functional dependencies and denial constraints. We show how the complexity depends on the type of the constraints considered, their number, and the size of the query. We obtain several new PTIME cases, using new algorithms.
We investigate the computational complexity of minimizing the source side-effect in order to remove a given number of tuples from the output of a conjunctive query. In particular, given a multi-relational database D, a conjunctive query Q, and a positive integer k as input, the goal is to find a minimum subset of input tuples to remove from D that would eliminate at least k output tuples from Q(D). This problem generalizes the well-studied deletion propagation problem in databases. In addition, it encapsulates the notion of intervention for aggregate queries used in data analysis, with applications to explaining interesting observations on the output. We show a dichotomy in the complexity of this problem for the class of full conjunctive queries without self-joins by giving a characterization of the structure of Q that makes the problem either polynomial-time solvable or NP-hard. Our proof of this dichotomy result already gives an exact algorithm in the easy cases; we complement this by giving an approximation algorithm for the hard cases of the problem.
We investigate the computational complexity of minimizing the source side-effect in order to remove a given number of tuples from the output of a conjunctive query. This is a variant of the well-studied deletion propagation problem, the difference being that we are interested in removing the smallest subset of input tuples that eliminates a given number of output tuples, while deletion propagation focuses on removing a specific output tuple. We call this the Aggregated Deletion Propagation problem. We completely characterize the poly-time solvability of this problem for arbitrary conjunctive queries without self-joins. This includes a poly-time algorithm to decide solvability, as well as an exact structural characterization of NP-hard instances. We also provide a practical algorithm for this problem (a heuristic for NP-hard instances) and evaluate its experimental performance on real and synthetic datasets.
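As a rough illustration of the problem studied in the two abstracts above, here is a brute-force Python sketch of aggregated deletion propagation on a tiny join query. It is not the dichotomy-based or heuristic algorithms those papers describe; the relations R and S and the query Q(x, z) :- R(x, y), S(y, z) are invented for illustration.

```python
from itertools import combinations

# Toy input database for the join query Q(x, z) :- R(x, y), S(y, z).
R = {("a", 1), ("b", 1), ("c", 2)}
S = {(1, "p"), (1, "q"), (2, "p")}

def query_output(r, s):
    """Evaluate the full conjunctive query: join R and S on the shared variable y."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

def min_side_effect(k):
    """Smallest set of input tuples whose removal kills at least k output tuples."""
    baseline = len(query_output(R, S))
    inputs = [("R", t) for t in R] + [("S", t) for t in S]
    for size in range(len(inputs) + 1):          # try removal sets by increasing size
        for removal in combinations(inputs, size):
            r = R - {t for (rel, t) in removal if rel == "R"}
            s = S - {t for (rel, t) in removal if rel == "S"}
            if baseline - len(query_output(r, s)) >= k:
                return removal
    return None

# Several single input tuples (e.g. R(a, 1)) already eliminate two of the five
# output tuples, so a removal set of size one is returned.
print(min_side_effect(2))
```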
A consistent query answer in an inconsistent database is an answer obtained in every (minimal) repair. The repairs are obtained by resolving all conflicts in all possible ways. Often, however, the user is able to provide a preference on how conflicts should be resolved. We investigate here the framework of preferred consistent query answers, in which user preferences are used to narrow down the set of repairs to a set of preferred repairs. We axiomatize desirable properties of preferred repairs. We present three different families of preferred repairs and study their mutual relationships. Finally, we investigate the complexity of preferred repairing and computing preferred consistent query answers.
A relational database is inconsistent if it does not satisfy a given set of integrity constraints. Nevertheless, it is likely that most of the data in it is consistent with the constraints. In this paper we apply logic programming based on answer sets to the problem of retrieving consistent information from a possibly inconsistent database. Since consistent information persists from the original database to every one of its minimal repairs, the approach is based on a specification of database repairs using disjunctive logic programs with exceptions, whose answer set semantics can be represented and computed by systems that implement the stable model semantics. These programs allow us to declare persistence by defaults and repairing changes by exceptions. We concentrate mainly on logic programs for binary integrity constraints, which include most of the integrity constraints found in practice.
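As a minimal sketch of the repair-as-logic-program idea, the snippet below uses the clingo ASP solver's Python bindings (an assumption; the paper itself uses disjunctive programs with exceptions rather than the plain disjunctive encoding shown here). The relation emp and the single-salary constraint are invented; each answer set of the program corresponds to one repair.

```python
import clingo  # assumes the clingo ASP solver's Python package is installed

# Plain disjunctive repair program: for each pair of tuples violating the
# (invented) constraint "an employee has a single salary", delete one of them.
PROGRAM = """
emp("alice", 1000).  emp("alice", 2000).  emp("bob", 1500).

% Resolve each violation by deleting one of the two conflicting tuples.
del(emp(N, S1)) ; del(emp(N, S2)) :- emp(N, S1), emp(N, S2), S1 < S2.

% A tuple persists into the repair unless it has been deleted.
repaired(N, S) :- emp(N, S), not del(emp(N, S)).

#show repaired/2.
"""

ctl = clingo.Control(["0"])             # "0": enumerate all answer sets (repairs)
ctl.add("base", [], PROGRAM)
ctl.ground([("base", [])])
ctl.solve(on_model=lambda model: print(model))
```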