
Magic Sets for Disjunctive Datalog Programs

Added by Mario Alviano
Publication date: 2012
Language: English





In this paper, a new technique for the optimization of (partially) bound queries over disjunctive Datalog programs with stratified negation is presented. The technique exploits the propagation of query bindings and extends the Magic Set (MS) optimization technique. An important feature of disjunctive Datalog is nonmonotonicity, which calls for nondeterministic implementations such as backtracking search. A distinguishing characteristic of the new method is that the optimization can be exploited also during the nondeterministic phase: after some assumptions have been made during the computation, parts of the program may become irrelevant to the query under these assumptions, which allows for dynamic pruning of the search space. In contrast, the effect of previously defined MS methods for disjunctive Datalog is limited to the deterministic portion of the process. As a consequence, the potential performance gain of the proposed method can be exponential, as observed empirically. The correctness of MS is established via a strong relationship between MS and unfounded sets that had not been studied in the literature before; this relationship also allows the method to be extended naturally to programs with stratified negation. The proposed method has been implemented in DLV and various experiments have been conducted. Experimental results on synthetic data confirm the utility of MS for disjunctive Datalog and highlight the computational gain obtained by the new method with respect to previously proposed MS methods for disjunctive Datalog programs. Further experiments on real-world data show the benefits of MS within an application scenario that has received considerable attention in recent years: answering user queries over possibly inconsistent databases originating from the integration of autonomous sources of information.
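
To make the binding-propagation idea concrete, here is a minimal sketch of the classical (non-disjunctive) magic set rewriting for a partially bound query; the predicate names and DLV-style syntax are chosen for illustration only, and the disjunctive rewriting studied in the paper is more involved.

    % Original program, queried with anc(1,Y)? (first argument bound)
    anc(X,Y) :- par(X,Y).
    anc(X,Y) :- par(X,Z), anc(Z,Y).

    % Magic set rewriting for the binding pattern "bf" (bound, free)
    magic_anc_bf(1).                               % seed: the constant bound in the query
    magic_anc_bf(Z) :- magic_anc_bf(X), par(X,Z).  % propagate the binding through the recursion
    anc(X,Y) :- magic_anc_bf(X), par(X,Y).         % original rules, restricted to relevant bindings
    anc(X,Y) :- magic_anc_bf(X), par(X,Z), anc(Z,Y).

Only the part of the ancestor relation reachable from the query constant is ever computed; the method proposed in the paper pushes the same restriction into the nondeterministic model-generation phase of disjunctive programs.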



Related research

Description Logic Programs (dl-programs) proposed by Eiter et al. constitute an elegant yet powerful formalism for integrating answer set programming with description logics for the Semantic Web. In this paper, we generalize the notions of completion and loop formulas of logic programs to description logic programs and show that the answer sets of a dl-program can be precisely captured by the models of its completion and loop formulas. Furthermore, we propose a new, alternative semantics for dl-programs, called the canonical answer set semantics, which is defined by the models of the completion that satisfy what are called canonical loop formulas. A desirable property of canonical answer sets is that they are free of circular justifications. Some properties of canonical answer sets are also explored.
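
For intuition, here is the standard propositional example (for ordinary logic programs, not the dl-program generalization) of why loop formulas are needed on top of completion:

    p :- q.
    q :- p.

The completion { p <-> q, q <-> p } has two models, {} and {p, q}, but only {} is an answer set: the loop {p, q} has no rule supporting it from outside itself, so its loop formula forces both atoms to be false, and the models of completion plus loop formulas coincide exactly with the answer sets.
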
Dempster-Shafer Theory (DST) generalizes Bayesian probability theory, offering useful additional information, but suffers from a high computational burden. A lot of work has been done to reduce the complexity of the computations used in information fusion with Dempster's rule. Yet, little research has been conducted to reduce the complexity of computing the conjunctive and disjunctive decompositions of evidence, which are at the core of other important methods of information fusion. In this paper, we propose a method designed to exploit the actual evidence (information) contained in these decompositions in order to compute them. It is based on a new notion that we call the focal point, derived from the notion of focal set. With it, we are able to reduce these computations to linear complexity in the number of focal sets in some cases. In a broader perspective, our formulas have the potential to remain tractable when the size of the frame of discernment exceeds a few dozen possible states, contrary to the existing literature. This article extends (and translates) our work published at the French conference GRETSI in 2019.
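
For reference, a small example of the focal-set notion the method builds on (numbers chosen arbitrarily for illustration):

    Frame of discernment:  Omega = {a, b, c}
    Mass function:         m({a}) = 0.5,  m({a,b}) = 0.3,  m(Omega) = 0.2
    Focal sets (subsets with nonzero mass): {a}, {a,b}, Omega

Since the number of focal sets is usually far smaller than the 2^|Omega| subsets of the frame, expressing the cost of the conjunctive and disjunctive decompositions in terms of the focal sets is what allows larger frames to remain tractable.
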
In this note we consider the problem of introducing variables in temporal logic programs under the formalism of Temporal Equilibrium Logic (TEL), an extension of Answer Set Programming (ASP) for dealing with linear-time modal operators. To this end, we provide a definition of a first-order version of TEL that shares the syntax of first-order Linear-time Temporal Logic (LTL) but has a different semantics, selecting some LTL models that we call temporal stable models. Then, we consider a subclass of theories (called splittable temporal logic programs) that are close to usual logic programs but allow only a restricted use of temporal operators. In this setting, we provide a syntactic definition of safe variables that suffices to show the property of domain independence -- that is, the addition of arbitrary elements to the universe does not vary the set of temporal stable models. Finally, we present a method for computing the derivable facts by constructing a non-temporal logic program with variables that is fed to a standard ASP grounder. The information provided by the grounder is then used to generate a subset of ground temporal rules which is equivalent to (and generally smaller than) the full program instantiation.
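
As a non-temporal illustration of why a syntactic safety condition is needed for domain independence (the paper's definition additionally restricts how variables may occur under temporal operators):

    p(X) :- q(X), not r(X).   % safe: X occurs in the positive body atom q(X)
    p(X) :- not r(X).         % unsafe: adding a fresh constant c to the universe adds the fact p(c)

Safe rules derive the same facts regardless of extra universe elements, which is exactly the domain-independence property established for splittable temporal logic programs.
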
In human-aware planning, a planning agent may need to provide an explanation to a human user on why its plan is optimal. A popular approach to do this is called model reconciliation, where the agent tries to reconcile the differences between its model and the human's model such that the plan is also optimal in the human's model. In this paper, we present a logic-based framework for model reconciliation that extends beyond the realm of planning. More specifically, given a knowledge base $KB_1$ entailing a formula $\varphi$ and a second knowledge base $KB_2$ not entailing it, model reconciliation seeks an explanation, in the form of a cardinality-minimal subset of $KB_1$, whose integration into $KB_2$ makes the entailment possible. Our approach, based on ideas originating in the context of the analysis of inconsistencies, exploits the existing hitting set duality between minimal correction sets (MCSes) and minimal unsatisfiable sets (MUSes) in order to identify an appropriate explanation. However, in contrast to those works targeting inconsistent formulas, which assume a single knowledge base, MCSes and MUSes are here computed over two distinct knowledge bases. We conclude our paper with an empirical evaluation of the newly introduced approach on planning instances, where we show how it outperforms an existing state-of-the-art solver, and on generic non-planning instances from recent SAT competitions, for which no other solver exists.
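
A minimal propositional sketch of the setting (formulas chosen here purely for illustration):

    KB_1 = { p, p -> q, r, r -> q }    (entails q)
    KB_2 = { s }                       (does not entail q)
    Cardinality-minimal explanations: { p, p -> q } and { r, r -> q };
    adding either subset of KB_1 to KB_2 makes q entailed.
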
In many scenarios, complete and incomplete information coexist. For this reason, the knowledge representation and database communities have long shown interest in simultaneously supporting the closed- and the open-world views when reasoning about logic theories. Here we consider the setting of querying possibly incomplete data using logic theories, formalized as the evaluation of an ontology-mediated query (OMQ) that pairs a query with a theory, sometimes called an ontology, expressing background knowledge. This can be further enriched by specifying a set of closed predicates from the theory that are to be interpreted under the closed-world assumption, while the rest are interpreted with the open-world view. In this way we can retrieve more precise answers to queries by leveraging the partial completeness of the data. The central goal of this paper is to understand the relative expressiveness of OMQ languages in which the ontology is written in the expressive Description Logic (DL) ALCHOI and includes a set of closed predicates. We consider a restricted class of conjunctive queries. Our main result is to show that every query in this non-monotonic query language can be translated in polynomial time into Datalog with negation under the stable model semantics. To overcome the challenge that Datalog has no direct means to express the existential quantification present in ALCHOI, we define a two-player game that characterizes the satisfaction of the ontology, and design a Datalog query that can decide the existence of a winning strategy for the game. If there are no closed predicates, that is, in the case of querying a plain ALCHOI knowledge base, our translation yields a positive disjunctive Datalog program of polynomial size. To the best of our knowledge, and unlike previous translations for related fragments with expressive (non-Horn) DLs, these are the first polynomial-time translations.
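
To give the flavor of such translations, here is a hedged sketch of how simple DL axioms (without existential quantification on the right-hand side) map to positive disjunctive Datalog rules; the paper's game-based construction for full ALCHOI with closed predicates is considerably more involved.

    % A ⊑ B ⊔ C   (every A is a B or a C)
    b(X) v c(X) :- a(X).
    % ∃R.B ⊑ A    (anything with an R-successor in B is an A)
    a(X) :- r(X,Y), b(Y).

Axioms that require existential quantification on the right-hand side, such as A ⊑ ∃R.B, have no direct Datalog counterpart, which is exactly the obstacle the two-player game in the paper is designed to overcome.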
