Structural decomposition methods have been developed for identifying tractable classes of instances of fundamental problems in databases, such as conjunctive queries and query containment, of the constraint satisfaction problem in artificial intelligence, or more generally of the homomorphism problem over relational structures. Most structural decomposition methods can be characterized through hypergraph games that are variations of the Robber and Cops graph game that characterizes the notion of treewidth. In particular, decomposition trees correspond to monotone winning strategies, where the escape space of the robber on the hypergraph is shrunk monotonically by the cops. However, unlike the treewidth case, there are hypergraphs where monotone strategies do not exist, while the robber can be captured by means of more complex non-monotone strategies; these more powerful strategies do not, in general, correspond to valid decompositions. The paper provides a general way to exploit the power of non-monotone strategies, by allowing a disciplined form of non-monotonicity that is characteristic of cops playing in a greedy way. It is shown that deciding the existence of a (non-monotone) greedy winning strategy (and computing one, if any) is tractable. Moreover, despite their non-monotonicity, such strategies always induce valid decomposition trees, which can be computed efficiently from them. As a consequence, greedy strategies allow us to define new islands of tractability for the considered problems, properly including all previously known classes of tractable instances.
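As a rough illustration of the underlying game, the following Python sketch decides by brute force whether k cops have a monotone winning strategy in the Robber and Cops game on a small graph. It is not the paper's algorithm for greedy strategies on hypergraphs; the function names and the strict-shrinking formulation of monotonicity are our own simplifications for a toy example.

```python
from itertools import combinations

def reachable(adj, sources, banned):
    """Vertices reachable from `sources`, never entering `banned`."""
    seen = {s for s in sources if s not in banned}
    stack = list(seen)
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in banned and w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def components(adj, banned):
    """Connected components of the graph with the `banned` vertices removed."""
    todo, comps = set(adj) - set(banned), []
    while todo:
        comp = reachable(adj, {todo.pop()}, banned)
        todo -= comp
        comps.append(frozenset(comp))
    return comps

def monotone_cops_win(adj, k):
    """Brute-force search for a monotone winning strategy of k cops:
    at every move the robber's escape space is required to strictly shrink."""
    vertices = sorted(adj)

    def win(cops, space):
        if not space:
            return True
        for nxt in (frozenset(c) for c in combinations(vertices, k)):
            # while the cops relocate, the robber may run through any vertex
            # that is not guarded both before and after the move
            escape = reachable(adj, space, cops & nxt)
            new_spaces = [c for c in components(adj, nxt) if c & escape]
            if all(c < space for c in new_spaces) and \
               all(win(nxt, c) for c in new_spaces):
                return True
        return False

    return all(win(frozenset(), c) for c in components(adj, frozenset()))

# On a triangle (treewidth 2) three cops win monotonically, two do not.
triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
print(monotone_cops_win(triangle, 3), monotone_cops_win(triangle, 2))  # True False
```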
Coalitional games serve the purpose of modeling payoff distribution problems in scenarios where agents can collaborate by forming coalitions in order to obtain higher worths than by acting in isolation. In the classical Transferable Utility (TU) setting, coalition worths can be freely distributed amongst agents. However, in several application scenarios this is not the case, and the Non-Transferable Utility (NTU) setting must be considered, where additional application-oriented constraints are imposed on the possible worth distributions. In this paper, an approach to defining NTU games is proposed, based on describing the allowed distributions via a set of mixed-integer linear constraints applied to an underlying TU game. It is shown that such games allow non-transferable conditions on worth distributions to be specified in a natural and succinct way. The properties of, and the relationships among, the most prominent solution concepts for NTU games are investigated when they are applied to (mixed-integer) constrained games. Finally, a thorough analysis is carried out to assess the impact of issuing constraints on the computational complexity of some of these solution concepts.
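To make the constrained setting concrete, here is a minimal sketch assuming a toy three-player TU game: it enumerates the payoff vectors that satisfy a hypothetical set of integrality and linear constraints and that also lie in the core. The worth function, the constraint, and all names are illustrative only, not the paper's formalism.

```python
from itertools import combinations, product

# Toy TU game on players {1, 2, 3}: v gives the worth of every coalition.
players = (1, 2, 3)
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 4, frozenset({1, 3}): 4, frozenset({2, 3}): 4,
     frozenset({1, 2, 3}): 6}

def in_core(x):
    """Core membership: the grand-coalition worth is fully distributed and
    no coalition receives less than it could obtain on its own."""
    if sum(x.values()) != v[frozenset(players)]:
        return False
    return all(sum(x[p] for p in S) >= v[frozenset(S)]
               for r in range(1, len(players))
               for S in combinations(players, r))

def allowed(x):
    """A hypothetical NTU-style restriction expressed by integrality and
    linear constraints: non-negative integer payoffs with x1 >= x2."""
    return all(x[p] >= 0 for p in players) and x[1] >= x[2]

total = v[frozenset(players)]
candidates = ({1: a, 2: b, 3: total - a - b}
              for a, b in product(range(total + 1), repeat=2) if a + b <= total)
print([x for x in candidates if allowed(x) and in_core(x)])  # [{1: 2, 2: 2, 3: 2}]
```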
Tree projections provide a mathematical framework that encompasses all the various (purely) structural decomposition methods that have been proposed in the literature to single out classes of nearly-acyclic (hyper)graphs, such as the tree decomposition method, which is the most powerful decomposition method on graphs, and the (generalized) hypertree decomposition method, which is its natural counterpart on arbitrary hypergraphs. The paper analyzes this framework, focusing in particular on minimal tree projections, that is, on tree projections without useless redundancies. First, it is shown that minimal tree projections enjoy a number of properties that are usually required for normal-form decompositions in various structural decomposition methods. In particular, they enjoy the same kind of connection properties as (minimal) tree decompositions of graphs; this result is tight, in the light of the negative answer provided to the open question of whether they enjoy a slightly stronger notion of connection property, which had been defined to speed up the computation of hypertree decompositions. Second, it is shown that tree projections admit a natural game-theoretic characterization in terms of the Captain and Robber game. In this game, as for the Robber and Cops game characterizing tree decompositions, the existence of winning strategies implies the existence of monotone ones. As a special case, the Captain and Robber game can be used to characterize the generalized hypertree decomposition method, for which such a game-theoretic characterization was missing and had been asked for. Besides their theoretical interest, these results have immediate algorithmic applications, both for the general setting and for structural decomposition methods that can be recast in terms of tree projections.
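For readers unfamiliar with the baseline notions, the sketch below checks the two defining conditions of a tree decomposition (edge coverage and the connection property) on a toy graph. The function and its input conventions are our own illustrative choices; they do not implement tree projections or the Captain and Robber game.

```python
def is_tree_decomposition(graph_edges, bags, tree_edges):
    """Check the defining conditions of a tree decomposition:
    (1) every edge of the graph is covered by some bag, and
    (2) for every vertex, the bags containing it induce a connected
    subtree (the 'connection property' discussed above)."""
    if not all(any({u, v} <= bags[b] for b in bags) for u, v in graph_edges):
        return False
    adj = {b: set() for b in bags}
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)
    for x in set().union(*bags.values()):
        holding = {b for b in bags if x in bags[b]}
        start = next(iter(holding))
        seen, stack = {start}, [start]
        while stack:
            b = stack.pop()
            for c in adj[b]:
                if c in holding and c not in seen:
                    seen.add(c)
                    stack.append(c)
        if seen != holding:
            return False
    return True

# A path a-b-c with bags {a, b} and {b, c} joined by one tree edge.
print(is_tree_decomposition([("a", "b"), ("b", "c")],
                            {1: {"a", "b"}, 2: {"b", "c"}},
                            [(1, 2)]))   # True
```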
Several variants of the Constraint Satisfaction Problem have been proposed and investigated in the literature for modelling those scenarios where solutions are associated with some given costs. Within these frameworks computing an optimal solution is an NP-hard problem in general; yet, when restricted over classes of instances whose constraint interactions can be modelled via (nearly-)acyclic graphs, this problem is known to be solvable in polynomial time. In this paper, larger classes of tractable instances are singled out, by discussing solution approaches based on exploiting hypergraph acyclicity and, more generally, structural decomposition methods, such as (hyper)tree decompositions.
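As a minimal illustration of why acyclicity helps in the presence of costs, the following sketch performs bottom-up dynamic programming on a tree-structured cost network. The interface (unary and binary cost functions, a rooted children map) and the tiny example are assumptions made for the sketch, not the decomposition-based algorithms studied in the paper.

```python
def min_cost_tree_csp(domains, children, unary_cost, binary_cost, root):
    """Bottom-up dynamic programming on a tree-structured cost network:
    table[a] is the best cost of the subtree rooted at v when v takes
    value a; acyclicity is what makes a single bottom-up pass sufficient."""
    def solve(v):
        table = {a: unary_cost(v, a) for a in domains[v]}
        for c in children[v]:
            child = solve(c)
            for a in table:
                table[a] += min(child[b] + binary_cost(v, a, c, b)
                                for b in domains[c])
        return table
    return min(solve(root).values())

# Example: two variables with domain {0, 1}; value 1 costs 1, equal values cost 2.
doms = {"x": [0, 1], "y": [0, 1]}
kids = {"x": ["y"], "y": []}
print(min_cost_tree_csp(doms, kids,
                        lambda v, a: a,
                        lambda v, a, c, b: 0 if a != b else 2,
                        "x"))   # 1, e.g. x = 0 and y = 1
```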
Mechanism design is addressed in the context of fair allocations of indivisible goods with monetary compensation. Motivated by a real-world social choice problem, mechanisms with verification are considered in a setting where (i) agents' declarations on allocated goods can be fully verified before payments are performed, and where (ii) verification is not used to punish agents whose declarations turn out to be incorrect. Within this setting, a mechanism is designed that is shown to be truthful, efficient, and budget-balanced, and in which agents' utilities are fairly determined by the Shapley value of suitable coalitional games. Computing the outcome of the proposed mechanism is, however, shown to be #P-complete. Thus, to deal with applications involving many agents, two polynomial-time randomized variants are also proposed: one that is still truthful and efficient, and that is approximately budget-balanced with high probability; and another one that is truthful in expectation, while still budget-balanced and efficient.
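Since the payments rest on the Shapley value, the sketch below computes exact Shapley values of a toy coalitional game by enumerating player orderings. This exponential enumeration is purely illustrative of the definition (and of why exact computation is expensive); the example game is hypothetical and is not taken from the paper.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley value: average each player's marginal contribution
    over all orderings; v maps a frozenset of players to its worth and
    must be defined on every subset, hence the exponential input."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition |= {p}
    return {p: phi[p] / factorial(len(players)) for p in players}

# Toy game: any coalition containing player 1 and someone else is worth 10.
v = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0,
     frozenset({1, 2}): 10, frozenset({1, 3}): 10, frozenset({2, 3}): 0,
     frozenset({1, 2, 3}): 10}
print(shapley_values([1, 2, 3], v))   # {1: 6.67, 2: 1.67, 3: 1.67} approximately
```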
We investigate the complexity of bounding the uncertainty of graphical games, and we provide new insight into the intrinsic difficulty of computing Nash equilibria. In particular, we show that, if one adds very simple and natural additional requirements to a graphical game, the existence of Nash equilibria is no longer guaranteed, and computing an equilibrium is an intractable problem. Moreover, if stronger equilibrium conditions are required for the game, we get hardness results for the second level of the polynomial hierarchy. Our results offer a clear picture of the complexity of mixed Nash equilibria in graphical games, and answer some open research questions posed by Conitzer and Sandholm (2003).
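As a minimal illustration of the setting, the sketch below verifies whether a given pure strategy profile is a Nash equilibrium of a graphical game, exploiting the fact that each player's payoff depends only on its own action and those of its neighbours. The data layout and names are our own, and the sketch is limited to pure strategies, whereas the paper's hardness results concern mixed equilibria and richer requirements.

```python
def is_pure_nash(profile, neighbors, strategies, payoff):
    """Check whether `profile` (player -> action) is a pure Nash equilibrium
    of a graphical game: payoff(p, local) may only look at the actions in
    `local`, i.e. those of p and of its neighbours."""
    for p, action in profile.items():
        local = {q: profile[q] for q in neighbors[p]}
        current = payoff(p, {**local, p: action})
        for alt in strategies[p]:
            if payoff(p, {**local, p: alt}) > current:
                return False        # p has a profitable unilateral deviation
    return True

# A two-player coordination game written as a (trivial) graphical game.
neighbors = {"a": ["b"], "b": ["a"]}
strategies = {"a": [0, 1], "b": [0, 1]}
coord = lambda p, local: 1 if len(set(local.values())) == 1 else 0
print(is_pure_nash({"a": 1, "b": 1}, neighbors, strategies, coord),   # True
      is_pure_nash({"a": 0, "b": 1}, neighbors, strategies, coord))   # False
```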
Evaluating conjunctive queries and solving constraint satisfaction problems are fundamental problems in database theory and artificial intelligence, respectively. These problems are NP-hard, so several research efforts have been made in the literature to identify tractable classes, known as islands of tractability, as well as to devise clever heuristics for solving real-world instances efficiently. Many heuristic approaches are based on enforcing on the given instance a property called local consistency, where (in database terms) each tuple in every query atom matches at least one tuple in every other query atom. Interestingly, it turns out that for many well-known classes of queries, such as acyclic queries, enforcing local consistency is even sufficient to solve the given instance correctly. However, the precise power of such a procedure was unclear, except for some very restricted cases. The paper provides full answers to the long-standing questions about the precise power of algorithms based on enforcing local consistency. The classes of instances where enforcing local consistency turns out to be a correct query-answering procedure are, however, not efficiently recognizable. The paper therefore finally focuses on certain subclasses defined in terms of the novel notion of greedy tree projections. These latter classes are shown to be efficiently recognizable and strictly larger than most islands of tractability known so far, both in the general case of tree projections and for specific structural decomposition methods.
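The following sketch shows, under simplifying assumptions, what enforcing local (pairwise) consistency amounts to in database terms: each atom is repeatedly semijoined with every other atom until a fixpoint is reached. The representation of relations and the function names are illustrative, not the paper's formal machinery.

```python
def semijoin(rtuples, rvars, stuples, svars):
    """Keep the tuples of r that agree with at least one tuple of s
    on the variables shared by the two atoms."""
    shared = [x for x in rvars if x in svars]
    ridx = [rvars.index(x) for x in shared]
    sidx = [svars.index(x) for x in shared]
    keys = {tuple(t[i] for i in sidx) for t in stuples}
    return [t for t in rtuples if tuple(t[i] for i in ridx) in keys]

def enforce_local_consistency(relations):
    """relations: atom name -> (list of variables, list of tuples).
    Semijoin every atom with every other atom until a fixpoint is reached,
    so that each surviving tuple matches some tuple in every other atom."""
    changed = True
    while changed:
        changed = False
        for a in relations:
            avars, atuples = relations[a]
            for b in relations:
                if a == b:
                    continue
                bvars, btuples = relations[b]
                reduced = semijoin(atuples, avars, btuples, bvars)
                if len(reduced) < len(atuples):
                    relations[a] = (avars, reduced)
                    atuples = reduced
                    changed = True
    return relations

# r(X, Y) joined with s(Y, Z): the dangling tuples of both atoms disappear.
db = {"r": (["X", "Y"], [(1, 2), (3, 4)]), "s": (["Y", "Z"], [(2, 5), (9, 9)])}
print(enforce_local_consistency(db))
# {'r': (['X', 'Y'], [(1, 2)]), 's': (['Y', 'Z'], [(2, 5)])}
```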
In this paper, a new technique for the optimization of (partially) bound queries over disjunctive Datalog programs with stratified negation is presented. The technique exploits the propagation of query bindings and extends the Magic Set (MS) optimization technique. An important feature of disjunctive Datalog is nonmonotonicity, which calls for nondeterministic implementations, such as backtracking search. A distinguishing characteristic of the new method is that the optimization can be exploited also during the nondeterministic phase. In particular, after some assumptions have been made during the computation, parts of the program may become irrelevant to the query under these assumptions. This allows for dynamic pruning of the search space. In contrast, the effect of the previously defined MS methods for disjunctive Datalog is limited to the deterministic portion of the process. In this way, the potential performance gain of the proposed method can be exponential, as observed empirically. The correctness of MS is established thanks to a strong relationship between MS and unfounded sets that had not been studied in the literature before. This insight also allows the method to be extended naturally to programs with stratified negation. The proposed method has been implemented in DLV and various experiments have been conducted. Experimental results on synthetic data confirm the utility of MS for disjunctive Datalog, and they highlight the computational gain that may be obtained by the new method with respect to the previously proposed MS methods for disjunctive Datalog programs. Further experiments on real-world data show the benefits of MS within an application scenario that has received considerable attention in recent years: the problem of answering user queries over possibly inconsistent databases originating from the integration of autonomous sources of information.
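As a much-simplified, non-disjunctive illustration of binding propagation, the sketch below answers the bound query ancestor("a", Y) over a toy parent relation by first computing a "magic" set of constants relevant to the binding and then restricting the bottom-up evaluation to it. The facts and names are hypothetical, and the actual MS method for disjunctive Datalog with stratified negation is considerably more involved than this sketch.

```python
# Query ancestor("a", Y) with the first argument bound, over these facts.
parent = {("a", "b"), ("b", "c"), ("c", "d"), ("x", "y")}

def magic_ancestor(query_constant):
    # "Magic" seed: propagate the binding of the first argument through the
    # recursive rule ancestor(X, Y) :- parent(X, Z), ancestor(Z, Y).
    magic = {query_constant}
    while True:
        new = {z for (x, z) in parent if x in magic} - magic
        if not new:
            break
        magic |= new
    # Bottom-up evaluation of ancestor, restricted to magic first arguments.
    anc = {(x, y) for (x, y) in parent if x in magic}
    while True:
        new = {(x, y) for (x, z) in parent if x in magic
               for (z2, y) in anc if z2 == z} - anc
        if not new:
            break
        anc |= new
    return {y for (x, y) in anc if x == query_constant}

print(magic_ancestor("a"))   # {'b', 'c', 'd'}; the ("x", "y") fact is never used
```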
Coalitional games are mathematical models suited to analyze scenarios where players can collaborate by forming coalitions in order to obtain higher worths than by acting in isolation. A fundamental problem for coalitional games is to single out the most desirable outcomes in terms of appropriate notions of worth distributions, which are usually called solution concepts. Motivated by the fact that decisions taken by realistic players cannot involve unbounded resources, recent computer science literature has reconsidered the definition of such concepts by advocating the relevance of assessing the amount of resources needed for their computation in terms of their computational complexity. Following this avenue of research, the paper provides a complete picture of the complexity issues arising with three prominent solution concepts for coalitional games with transferable utility, namely, the core, the kernel, and the bargaining set, whenever the game worth-function is represented in some reasonable compact form (otherwise, if the worths of all coalitions are explicitly listed, the input sizes are so large that complexity problems become artificially trivial). The starting point of the investigation is the setting of graph games, about which various open questions were stated in the literature. The paper answers these questions and, in addition, provides new insights into the setting, by characterizing the computational complexity of the three concepts in some relevant generalizations and specializations.
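For concreteness, the sketch below encodes a small graph game, in which the worth of a coalition is the total weight of the edges with both endpoints inside it, and tests core membership by brute force. The example weights and function names are our own; the brute-force test is exponential in the number of players, which is exactly the kind of cost the compact-representation analysis is concerned with.

```python
from itertools import combinations

def graph_game_worth(coalition, weights):
    """Graph game: the worth of a coalition is the total weight of the
    edges having both endpoints inside it."""
    return sum(w for (u, v), w in weights.items()
               if u in coalition and v in coalition)

def in_core(payoff, players, weights):
    """Brute-force core membership test: the payoff must distribute the
    grand-coalition worth and no coalition may profitably deviate."""
    if sum(payoff.values()) != graph_game_worth(set(players), weights):
        return False
    return all(sum(payoff[p] for p in S) >= graph_game_worth(set(S), weights)
               for r in range(1, len(players))
               for S in combinations(players, r))

# A triangle of weight-1 edges: the grand coalition is worth 3, any pair
# guarantees 1, and the equal split lies in the core.
weights = {(1, 2): 1, (1, 3): 1, (2, 3): 1}
print(in_core({1: 1, 2: 1, 3: 1}, [1, 2, 3], weights))   # True
```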
The problem of deciding whether CSP instances admit solutions has been deeply studied in the literature, and several structural tractability results have been derived so far. However, constraint satisfaction arises in practice as a computation problem where the focus is either on finding one solution, or on enumerating all solutions, possibly projected onto some given set of output variables. The paper investigates the structural tractability of the problem of enumerating (possibly projected) solutions, where tractability here means computability with polynomial delay (WPD), since in general exponentially many solutions may have to be computed. A general framework based on the notion of tree projection of hypergraphs is considered, which generalizes all known decomposition methods. Tractability results are obtained both for classes of structures where output variables are part of their specification, and for classes of structures where computability WPD must be ensured for any possible set of output variables. These results are shown to be tight, by exhibiting dichotomies for classes of structures of bounded arity where the tree decomposition method is considered.
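As a simple instance of enumeration with polynomial delay, the sketch below handles a tree-structured binary CSP: a leaves-to-root consistency pass makes the subsequent top-down expansion backtrack-free, so consecutive solutions are produced within polynomial time of one another. The interface and the example are our own simplifications and do not cover tree projections or projected solutions.

```python
def enumerate_tree_csp(domains, children, constraint, root):
    """Enumerate all solutions of a tree-structured binary CSP with
    polynomial delay. children[v] lists the child variables of v and
    constraint(p, a, c, b) tells whether p=a is compatible with c=b."""
    dom = {v: list(vals) for v, vals in domains.items()}

    def prune(v):                          # directional (leaves-to-root) consistency
        for c in children[v]:
            prune(c)
            dom[v] = [a for a in dom[v]
                      if any(constraint(v, a, c, b) for b in dom[c])]

    def expand(assignment, todo):          # backtrack-free top-down enumeration
        if not todo:
            yield assignment
            return
        v, par = todo[0]
        if par is None:
            values = dom[v]
        else:
            values = [b for b in dom[v] if constraint(par, assignment[par], v, b)]
        for b in values:
            yield from expand({**assignment, v: b}, todo[1:])

    prune(root)
    order, stack = [], [(root, None)]      # parents always precede children
    while stack:
        v, par = stack.pop()
        order.append((v, par))
        stack.extend((c, v) for c in children[v])
    yield from expand({}, order)

# Example: a path x - y - z where adjacent variables must take different values.
doms = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}
kids = {"x": ["y"], "y": ["z"], "z": []}
print(list(enumerate_tree_csp(doms, kids, lambda p, a, c, b: a != b, "x")))
# [{'x': 0, 'y': 1, 'z': 0}, {'x': 1, 'y': 0, 'z': 1}]
```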