
Shapley Meets Shapley

Added by Haris Aziz
Publication date: 2013
Research language: English





This paper concerns the analysis of the Shapley value in matching games. Matching games constitute a fundamental class of cooperative games which help understand and model auctions and assignments. In a matching game, the value of a coalition of vertices is the weight of the maximum size matching in the subgraph induced by the coalition. The Shapley value is one of the most important solution concepts in cooperative game theory. After establishing some general insights, we show that the Shapley value of matching games can be computed in polynomial time for some special cases: graphs with maximum degree two, and graphs that have a small modular decomposition into cliques or cocliques (complete k-partite graphs are a notable special case of this). The latter result extends to various other well-known classes of graph-based cooperative games. We continue by showing that computing the Shapley value of unweighted matching games is #P-complete in general. Finally, a fully polynomial-time randomized approximation scheme (FPRAS) is presented. This FPRAS can be considered the best positive result conceivable, in view of the #P-completeness result.
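
To make the definition concrete, here is a minimal sketch (not taken from the paper; the graph and helper names are illustrative) that brute-forces the Shapley value of a small unweighted matching game by averaging marginal contributions over all vertex orderings. Given the #P-completeness result, anything beyond a handful of vertices would need the FPRAS instead.

    from itertools import permutations
    from math import factorial

    def max_matching_size(vertices, edges):
        """Size of a maximum matching in the subgraph induced by `vertices` (brute force)."""
        induced = [e for e in edges if e[0] in vertices and e[1] in vertices]

        def best(remaining, used):
            if not remaining:
                return 0
            (u, v), rest = remaining[0], remaining[1:]
            skip = best(rest, used)
            if u not in used and v not in used:
                return max(skip, 1 + best(rest, used | {u, v}))
            return skip

        return best(induced, frozenset())

    def shapley_values(vertices, edges):
        """Exact Shapley value of each vertex: average marginal contribution over all orderings."""
        values = {v: 0.0 for v in vertices}
        for order in permutations(vertices):
            coalition, prev = set(), 0
            for v in order:
                coalition.add(v)
                cur = max_matching_size(coalition, edges)
                values[v] += cur - prev
                prev = cur
        n_orders = factorial(len(vertices))
        return {v: val / n_orders for v, val in values.items()}

    # A path on three vertices (maximum degree two, one of the tractable cases noted above).
    print(shapley_values([1, 2, 3], [(1, 2), (2, 3)]))
    # {1: 0.1667, 2: 0.6667, 3: 0.1667}: the middle vertex gets the largest share.
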




Related research

The attribution problem, that is, the problem of attributing a model's prediction to its base features, is well-studied. We extend the notion of attribution to also apply to feature interactions. The Shapley value is a commonly used method to attribute a model's prediction to its base features. We propose a generalization of the Shapley value called the Shapley-Taylor index that attributes the model's prediction to interactions of subsets of features up to some size k. The method is analogous to how the truncated Taylor series decomposes the function value at a certain point using its derivatives at a different point. In fact, we show that the Shapley-Taylor index is equal to the Taylor series of the multilinear extension of the set-theoretic behavior of the model. We axiomatize this method using the standard Shapley axioms -- linearity, dummy, symmetry and efficiency -- and an additional axiom that we call the interaction distribution axiom. This new axiom explicitly characterizes how interactions are distributed for a class of functions that model pure interaction. We contrast the Shapley-Taylor index against the previously proposed Shapley interaction index (cf. [9]) from the cooperative game theory literature. We also apply the Shapley-Taylor index to three models and identify interesting qualitative insights.
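
For the interaction-attribution abstract above, a minimal sketch of the standard discrete derivative delta_S F(T) that indices of this kind are built from; the exact Shapley-Taylor weighting is defined in the paper, and the set function F and the example below are illustrative assumptions.

    from itertools import chain, combinations

    def subsets(s):
        s = list(s)
        return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

    def discrete_derivative(F, S, T):
        """delta_S F(T) = sum over W <= S of (-1)^(|S|-|W|) * F(T union W), with T disjoint from S."""
        S, T = frozenset(S), frozenset(T)
        assert not (S & T)
        return sum((-1) ** (len(S) - len(W)) * F(T | frozenset(W)) for W in subsets(S))

    # A pure two-way interaction: the score is 1 only when features 1 and 2 are both present.
    F = lambda coalition: 1.0 if {1, 2} <= coalition else 0.0
    print(discrete_derivative(F, {1, 2}, set()))   # 1.0: the interaction term
    print(discrete_derivative(F, {1}, set()))      # 0.0: no first-order effect at the empty set
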
In this paper, we consider permutation manipulations by any subset of women in the Gale-Shapley algorithm. This paper is motivated by the college admissions process in China. Our results also answer Gusfield and Irving's open problem on what can be achieved by permutation manipulations. We present an efficient algorithm to find a strategy profile such that the induced matching is stable and Pareto-optimal while the strategy profile itself is inconspicuous. Surprisingly, we show that such a strategy profile actually forms a Nash equilibrium of the manipulation game. We also show that a strong Nash equilibrium or a super-strong Nash equilibrium does not always exist in general and that it is NP-hard to check the existence of these equilibria. We then consider alternative notions of strong and super-strong Nash equilibrium, under which we characterize super-strong Nash equilibria by Pareto-optimal strategy profiles. Finally, we show that it is NP-complete to find a manipulation that is strictly better for all members of the coalition. This result demonstrates a sharp contrast between weakly better-off outcomes and strictly better-off outcomes.
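
For context on the setting above, a minimal sketch of proposer-optimal Gale-Shapley deferred acceptance; this is the textbook algorithm, not the manipulation-finding algorithm from the paper, and the preference lists below are illustrative.

    def deferred_acceptance(proposer_prefs, reviewer_prefs):
        """Each argument maps an agent to a preference-ordered list of the other side."""
        rank = {r: {p: i for i, p in enumerate(prefs)} for r, prefs in reviewer_prefs.items()}
        next_choice = {p: 0 for p in proposer_prefs}   # next index each proposer will try
        engaged_to = {}                                # reviewer -> currently held proposer
        free = list(proposer_prefs)
        while free:
            p = free.pop()
            r = proposer_prefs[p][next_choice[p]]
            next_choice[p] += 1
            current = engaged_to.get(r)
            if current is None:
                engaged_to[r] = p
            elif rank[r][p] < rank[r][current]:        # reviewer prefers the new proposer
                engaged_to[r] = p
                free.append(current)
            else:
                free.append(p)
        return {p: r for r, p in engaged_to.items()}

    print(deferred_acceptance({"m1": ["w1", "w2"], "m2": ["w1", "w2"]},
                              {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}))
    # {'m2': 'w1', 'm1': 'w2'}: the unique stable matching for these preferences.
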
M. Pinter, 2012
We consider Young (1985)'s characterization of the Shapley value and give a new proof of this axiomatization. Moreover, as applications of the new proof, we show that Young (1985)'s axiomatization of the Shapley value works on various well-known subclasses of TU games.
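
For reference (textbook material, not a result of the paper above), the Shapley value of player i in a TU game (N, v) with n = |N| is

    \phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n-|S|-1)!}{n!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr),

i.e. player i's marginal contribution averaged over all orderings of N; Young's axiomatization pins down this same value via strong monotonicity rather than additivity.
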
Shapley values have become one of the most popular feature attribution explanation methods. However, most prior work has focused on post-hoc Shapley explanations, which can be computationally demanding due to their exponential time complexity and which preclude model regularization based on Shapley explanations during training. Thus, we propose to incorporate Shapley values themselves as latent representations in deep models, thereby making Shapley explanations first-class citizens in the modeling paradigm. This intrinsic explanation approach enables layer-wise explanations, explanation regularization of the model during training, and fast explanation computation at test time. We define the Shapley transform that transforms the input into a Shapley representation given a specific function. We operationalize the Shapley transform as a neural network module and construct both shallow and deep networks, called ShapNets, by composing Shapley modules. We prove that our Shallow ShapNets compute the exact Shapley values and our Deep ShapNets maintain the missingness and accuracy properties of Shapley values. We demonstrate on synthetic and real-world datasets that our ShapNets enable layer-wise Shapley explanations, novel Shapley regularizations during training, and fast computation while maintaining reasonable performance. Code is available at https://github.com/inouye-lab/ShapleyExplanationNetworks.
This paper addresses Monte Carlo algorithms for calculating the Shapley-Shubik power index in weighted majority games. First, we analyze a naive Monte Carlo algorithm and discuss the required number of samples. We then propose an efficient Monte Carlo algorithm and show that our algorithm reduces the required number of samples as compared to the naive algorithm.
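
A minimal sketch of the naive Monte Carlo estimator described above: sample random orderings of the players and count how often each one is pivotal, i.e. tips the coalition from losing to winning. The paper's more sample-efficient algorithm is not reproduced here, and the weights and quota below are illustrative.

    import random

    def shapley_shubik_mc(weights, quota, n_samples=100_000, seed=0):
        rng = random.Random(seed)
        players = list(weights)
        pivotal_counts = {p: 0 for p in players}
        for _ in range(n_samples):
            rng.shuffle(players)
            total = 0
            for p in players:
                total += weights[p]
                if total >= quota:            # p is the pivotal player in this ordering
                    pivotal_counts[p] += 1
                    break
        return {p: c / n_samples for p, c in pivotal_counts.items()}

    print(shapley_shubik_mc({"A": 50, "B": 49, "C": 1}, quota=51))
    # Roughly {'A': 0.667, 'B': 0.167, 'C': 0.167}; the exact index is (2/3, 1/6, 1/6).
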